00:00:00.001 Started by upstream project "autotest-per-patch" build number 132427 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.050 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.074 Fetching changes from the remote Git repository 00:00:00.076 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.098 Using shallow fetch with depth 1 00:00:00.098 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.098 > git --version # timeout=10 00:00:00.122 > git --version # 'git version 2.39.2' 00:00:00.122 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.135 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.135 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.382 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.395 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.408 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.408 > git config core.sparsecheckout # timeout=10 00:00:04.420 > git read-tree -mu HEAD # timeout=10 00:00:04.437 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.457 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.457 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.571 [Pipeline] Start of Pipeline 00:00:04.585 [Pipeline] library 00:00:04.587 Loading library shm_lib@master 00:00:04.587 Library shm_lib@master is cached. Copying from home. 00:00:04.602 [Pipeline] node 00:00:04.621 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:04.623 [Pipeline] { 00:00:04.632 [Pipeline] catchError 00:00:04.633 [Pipeline] { 00:00:04.646 [Pipeline] wrap 00:00:04.654 [Pipeline] { 00:00:04.664 [Pipeline] stage 00:00:04.666 [Pipeline] { (Prologue) 00:00:04.687 [Pipeline] echo 00:00:04.689 Node: VM-host-SM38 00:00:04.695 [Pipeline] cleanWs 00:00:04.705 [WS-CLEANUP] Deleting project workspace... 00:00:04.705 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.711 [WS-CLEANUP] done 00:00:04.890 [Pipeline] setCustomBuildProperty 00:00:04.984 [Pipeline] httpRequest 00:00:05.691 [Pipeline] echo 00:00:05.693 Sorcerer 10.211.164.20 is alive 00:00:05.700 [Pipeline] retry 00:00:05.702 [Pipeline] { 00:00:05.714 [Pipeline] httpRequest 00:00:05.718 HttpMethod: GET 00:00:05.718 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.719 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.728 Response Code: HTTP/1.1 200 OK 00:00:05.729 Success: Status code 200 is in the accepted range: 200,404 00:00:05.729 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.643 [Pipeline] } 00:00:08.661 [Pipeline] // retry 00:00:08.669 [Pipeline] sh 00:00:09.036 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.052 [Pipeline] httpRequest 00:00:09.414 [Pipeline] echo 00:00:09.415 Sorcerer 10.211.164.20 is alive 00:00:09.426 [Pipeline] retry 00:00:09.428 [Pipeline] { 00:00:09.444 [Pipeline] httpRequest 00:00:09.449 HttpMethod: GET 00:00:09.450 URL: http://10.211.164.20/packages/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz 00:00:09.451 Sending request to url: http://10.211.164.20/packages/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz 00:00:09.467 Response Code: HTTP/1.1 200 OK 00:00:09.467 Success: Status code 200 is in the accepted range: 200,404 00:00:09.468 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz 00:01:07.211 [Pipeline] } 00:01:07.230 [Pipeline] // retry 00:01:07.239 [Pipeline] sh 00:01:07.522 + tar --no-same-owner -xf spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz 00:01:10.842 [Pipeline] sh 00:01:11.125 + git -C spdk log --oneline -n5 00:01:11.125 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function 00:01:11.125 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:01:11.125 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true 00:01:11.125 d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:01:11.125 b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:01:11.144 [Pipeline] writeFile 00:01:11.159 [Pipeline] sh 00:01:11.447 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:11.459 [Pipeline] sh 00:01:11.742 + cat autorun-spdk.conf 00:01:11.742 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.742 SPDK_TEST_NVME=1 00:01:11.742 SPDK_TEST_FTL=1 00:01:11.742 SPDK_TEST_ISAL=1 00:01:11.742 SPDK_RUN_ASAN=1 00:01:11.742 SPDK_RUN_UBSAN=1 00:01:11.742 SPDK_TEST_XNVME=1 00:01:11.742 SPDK_TEST_NVME_FDP=1 00:01:11.742 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.749 RUN_NIGHTLY=0 00:01:11.751 [Pipeline] } 00:01:11.767 [Pipeline] // stage 00:01:11.781 [Pipeline] stage 00:01:11.783 [Pipeline] { (Run VM) 00:01:11.797 [Pipeline] sh 00:01:12.081 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:12.081 + echo 'Start stage prepare_nvme.sh' 00:01:12.081 Start stage prepare_nvme.sh 00:01:12.081 + [[ -n 4 ]] 00:01:12.081 + disk_prefix=ex4 00:01:12.081 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:12.081 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:12.081 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:12.081 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.081 ++ 
SPDK_TEST_NVME=1 00:01:12.081 ++ SPDK_TEST_FTL=1 00:01:12.081 ++ SPDK_TEST_ISAL=1 00:01:12.081 ++ SPDK_RUN_ASAN=1 00:01:12.081 ++ SPDK_RUN_UBSAN=1 00:01:12.081 ++ SPDK_TEST_XNVME=1 00:01:12.081 ++ SPDK_TEST_NVME_FDP=1 00:01:12.081 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.081 ++ RUN_NIGHTLY=0 00:01:12.081 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:01:12.081 + nvme_files=() 00:01:12.081 + declare -A nvme_files 00:01:12.081 + backend_dir=/var/lib/libvirt/images/backends 00:01:12.081 + nvme_files['nvme.img']=5G 00:01:12.081 + nvme_files['nvme-cmb.img']=5G 00:01:12.081 + nvme_files['nvme-multi0.img']=4G 00:01:12.081 + nvme_files['nvme-multi1.img']=4G 00:01:12.081 + nvme_files['nvme-multi2.img']=4G 00:01:12.081 + nvme_files['nvme-openstack.img']=8G 00:01:12.081 + nvme_files['nvme-zns.img']=5G 00:01:12.081 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:12.081 + (( SPDK_TEST_FTL == 1 )) 00:01:12.081 + nvme_files["nvme-ftl.img"]=6G 00:01:12.081 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:12.081 + nvme_files["nvme-fdp.img"]=1G 00:01:12.081 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:12.081 + for nvme in "${!nvme_files[@]}" 00:01:12.081 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:12.081 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.081 + for nvme in "${!nvme_files[@]}" 00:01:12.081 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:01:12.342 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:12.342 + for nvme in "${!nvme_files[@]}" 00:01:12.342 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:12.342 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.342 + for nvme in "${!nvme_files[@]}" 00:01:12.342 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:12.342 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:12.342 + for nvme in "${!nvme_files[@]}" 00:01:12.342 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:12.342 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.342 + for nvme in "${!nvme_files[@]}" 00:01:12.342 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:12.342 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.342 + for nvme in "${!nvme_files[@]}" 00:01:12.342 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:12.603 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.603 + for nvme in "${!nvme_files[@]}" 00:01:12.603 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:01:12.603 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:12.603 + for nvme in "${!nvme_files[@]}" 00:01:12.603 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:13.172 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.172 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:13.172 + echo 'End stage prepare_nvme.sh' 00:01:13.172 End stage prepare_nvme.sh 00:01:13.183 [Pipeline] sh 00:01:13.466 + DISTRO=fedora39 00:01:13.466 + CPUS=10 00:01:13.466 + RAM=12288 00:01:13.466 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:13.466 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:13.466 00:01:13.466 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:01:13.466 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:01:13.466 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:01:13.466 HELP=0 00:01:13.466 DRY_RUN=0 00:01:13.466 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:01:13.466 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:13.466 NVME_AUTO_CREATE=0 00:01:13.466 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:01:13.466 NVME_CMB=,,,, 00:01:13.466 NVME_PMR=,,,, 00:01:13.466 NVME_ZNS=,,,, 00:01:13.466 NVME_MS=true,,,, 00:01:13.466 NVME_FDP=,,,on, 00:01:13.466 SPDK_VAGRANT_DISTRO=fedora39 00:01:13.466 SPDK_VAGRANT_VMCPU=10 00:01:13.466 SPDK_VAGRANT_VMRAM=12288 00:01:13.466 SPDK_VAGRANT_PROVIDER=libvirt 00:01:13.466 SPDK_VAGRANT_HTTP_PROXY= 00:01:13.466 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:13.466 SPDK_OPENSTACK_NETWORK=0 00:01:13.466 VAGRANT_PACKAGE_BOX=0 00:01:13.466 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:13.466 FORCE_DISTRO=true 00:01:13.466 VAGRANT_BOX_VERSION= 00:01:13.466 EXTRA_VAGRANTFILES= 00:01:13.466 NIC_MODEL=e1000 00:01:13.466 00:01:13.466 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:01:13.466 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:01:16.012 Bringing machine 'default' up with 'libvirt' provider... 00:01:16.274 ==> default: Creating image (snapshot of base box volume). 00:01:16.534 ==> default: Creating domain with the following settings... 
00:01:16.534 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732123899_64d130492f528781dfeb 00:01:16.534 ==> default: -- Domain type: kvm 00:01:16.534 ==> default: -- Cpus: 10 00:01:16.534 ==> default: -- Feature: acpi 00:01:16.534 ==> default: -- Feature: apic 00:01:16.534 ==> default: -- Feature: pae 00:01:16.534 ==> default: -- Memory: 12288M 00:01:16.534 ==> default: -- Memory Backing: hugepages: 00:01:16.534 ==> default: -- Management MAC: 00:01:16.534 ==> default: -- Loader: 00:01:16.534 ==> default: -- Nvram: 00:01:16.534 ==> default: -- Base box: spdk/fedora39 00:01:16.534 ==> default: -- Storage pool: default 00:01:16.534 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732123899_64d130492f528781dfeb.img (20G) 00:01:16.534 ==> default: -- Volume Cache: default 00:01:16.534 ==> default: -- Kernel: 00:01:16.534 ==> default: -- Initrd: 00:01:16.534 ==> default: -- Graphics Type: vnc 00:01:16.534 ==> default: -- Graphics Port: -1 00:01:16.534 ==> default: -- Graphics IP: 127.0.0.1 00:01:16.534 ==> default: -- Graphics Password: Not defined 00:01:16.534 ==> default: -- Video Type: cirrus 00:01:16.534 ==> default: -- Video VRAM: 9216 00:01:16.534 ==> default: -- Sound Type: 00:01:16.534 ==> default: -- Keymap: en-us 00:01:16.534 ==> default: -- TPM Path: 00:01:16.534 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:16.534 ==> default: -- Command line args: 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:16.534 ==> default: -> value=-drive, 00:01:16.534 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:16.534 ==> default: -> value=-drive, 00:01:16.534 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:16.534 ==> default: -> value=-drive, 00:01:16.534 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.534 ==> default: -> value=-drive, 00:01:16.534 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.534 ==> default: -> value=-drive, 00:01:16.534 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:16.534 ==> default: -> value=-drive, 00:01:16.534 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:16.534 ==> default: -> value=-device, 00:01:16.534 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:16.534 ==> default: Creating shared folders metadata... 00:01:16.794 ==> default: Starting domain. 00:01:18.712 ==> default: Waiting for domain to get an IP address... 00:01:36.852 ==> default: Waiting for SSH to become available... 00:01:36.852 ==> default: Configuring and enabling network interfaces... 00:01:38.766 default: SSH address: 192.168.121.232:22 00:01:38.766 default: SSH username: vagrant 00:01:38.766 default: SSH auth method: private key 00:01:40.679 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:47.261 ==> default: Mounting SSHFS shared folder... 00:01:48.646 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:48.646 ==> default: Checking Mount.. 00:01:49.601 ==> default: Folder Successfully Mounted! 00:01:49.601 00:01:49.601 SUCCESS! 00:01:49.601 00:01:49.601 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:49.601 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:49.601 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:49.601 00:01:49.699 [Pipeline] } 00:01:49.709 [Pipeline] // stage 00:01:49.717 [Pipeline] dir 00:01:49.717 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:01:49.718 [Pipeline] { 00:01:49.725 [Pipeline] catchError 00:01:49.726 [Pipeline] { 00:01:49.733 [Pipeline] sh 00:01:50.009 + vagrant ssh-config --host vagrant 00:01:50.009 + sed -ne '/^Host/,$p' 00:01:50.009 + tee ssh_conf 00:01:52.589 Host vagrant 00:01:52.589 HostName 192.168.121.232 00:01:52.589 User vagrant 00:01:52.590 Port 22 00:01:52.590 UserKnownHostsFile /dev/null 00:01:52.590 StrictHostKeyChecking no 00:01:52.590 PasswordAuthentication no 00:01:52.590 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:52.590 IdentitiesOnly yes 00:01:52.590 LogLevel FATAL 00:01:52.590 ForwardAgent yes 00:01:52.590 ForwardX11 yes 00:01:52.590 00:01:52.606 [Pipeline] withEnv 00:01:52.609 [Pipeline] { 00:01:52.623 [Pipeline] sh 00:01:52.909 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:52.909 source /etc/os-release 00:01:52.909 [[ -e /image.version ]] && img=$(< /image.version) 00:01:52.909 # Minimal, systemd-like check. 
00:01:52.909 if [[ -e /.dockerenv ]]; then 00:01:52.909 # Clear garbage from the node'\''s name: 00:01:52.909 # agt-er_autotest_547-896 -> autotest_547-896 00:01:52.909 # $HOSTNAME is the actual container id 00:01:52.909 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:52.909 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:52.909 # We can assume this is a mount from a host where container is running, 00:01:52.909 # so fetch its hostname to easily identify the target swarm worker. 00:01:52.909 container="$(< /etc/hostname) ($agent)" 00:01:52.909 else 00:01:52.909 # Fallback 00:01:52.909 container=$agent 00:01:52.909 fi 00:01:52.909 fi 00:01:52.909 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:52.909 ' 00:01:52.923 [Pipeline] } 00:01:52.940 [Pipeline] // withEnv 00:01:52.948 [Pipeline] setCustomBuildProperty 00:01:52.965 [Pipeline] stage 00:01:52.967 [Pipeline] { (Tests) 00:01:52.985 [Pipeline] sh 00:01:53.269 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:53.548 [Pipeline] sh 00:01:53.836 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:53.852 [Pipeline] timeout 00:01:53.852 Timeout set to expire in 50 min 00:01:53.854 [Pipeline] { 00:01:53.868 [Pipeline] sh 00:01:54.153 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:54.414 HEAD is now at 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function 00:01:54.428 [Pipeline] sh 00:01:54.712 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:54.989 [Pipeline] sh 00:01:55.274 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:55.291 [Pipeline] sh 00:01:55.576 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:55.576 ++ readlink -f spdk_repo 00:01:55.576 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:55.576 + [[ -n /home/vagrant/spdk_repo ]] 00:01:55.576 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:55.576 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:55.576 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:55.576 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:55.576 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:55.576 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:55.576 + cd /home/vagrant/spdk_repo 00:01:55.576 + source /etc/os-release 00:01:55.576 ++ NAME='Fedora Linux' 00:01:55.576 ++ VERSION='39 (Cloud Edition)' 00:01:55.576 ++ ID=fedora 00:01:55.576 ++ VERSION_ID=39 00:01:55.576 ++ VERSION_CODENAME= 00:01:55.576 ++ PLATFORM_ID=platform:f39 00:01:55.576 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:55.576 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:55.576 ++ LOGO=fedora-logo-icon 00:01:55.576 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:55.576 ++ HOME_URL=https://fedoraproject.org/ 00:01:55.576 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:55.576 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:55.576 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:55.576 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:55.576 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:55.576 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:55.576 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:55.576 ++ SUPPORT_END=2024-11-12 00:01:55.576 ++ VARIANT='Cloud Edition' 00:01:55.576 ++ VARIANT_ID=cloud 00:01:55.576 + uname -a 00:01:55.576 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:55.576 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:56.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:56.410 Hugepages 00:01:56.410 node hugesize free / total 00:01:56.410 node0 1048576kB 0 / 0 00:01:56.410 node0 2048kB 0 / 0 00:01:56.410 00:01:56.410 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:56.410 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:56.410 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:56.410 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:56.410 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:56.410 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:56.410 + rm -f /tmp/spdk-ld-path 00:01:56.410 + source autorun-spdk.conf 00:01:56.410 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.410 ++ SPDK_TEST_NVME=1 00:01:56.410 ++ SPDK_TEST_FTL=1 00:01:56.410 ++ SPDK_TEST_ISAL=1 00:01:56.410 ++ SPDK_RUN_ASAN=1 00:01:56.410 ++ SPDK_RUN_UBSAN=1 00:01:56.410 ++ SPDK_TEST_XNVME=1 00:01:56.410 ++ SPDK_TEST_NVME_FDP=1 00:01:56.410 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.410 ++ RUN_NIGHTLY=0 00:01:56.410 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:56.410 + [[ -n '' ]] 00:01:56.410 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:56.410 + for M in /var/spdk/build-*-manifest.txt 00:01:56.410 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:56.410 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:56.410 + for M in /var/spdk/build-*-manifest.txt 00:01:56.410 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:56.410 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:56.410 + for M in /var/spdk/build-*-manifest.txt 00:01:56.410 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:56.410 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:56.410 ++ uname 00:01:56.410 + [[ Linux == \L\i\n\u\x ]] 00:01:56.410 + sudo dmesg -T 00:01:56.410 + sudo dmesg --clear 00:01:56.410 + dmesg_pid=5015 00:01:56.410 
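
The SPDK_TEST_* and SPDK_RUN_* flags echoed above are plain KEY=1 shell variables sourced from autorun-spdk.conf, and the autotest scripts use them to gate individual test stages. A minimal sketch of that gating pattern, assuming the conf path used in this run (the echo body is illustrative, not the harness's actual action):

    # Sketch only: the conf path matches this run; the guarded action is hypothetical.
    source /home/vagrant/spdk_repo/autorun-spdk.conf
    if [[ "${SPDK_TEST_NVME:-0}" -eq 1 ]]; then
        echo "NVMe functional tests enabled for this run"
    fi

Unset flags default to disabled, which is why the trace above checks values like SPDK_TEST_NVME_CMB with (( ... == 1 )) even when they never appear in the conf.
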
+ sudo dmesg -Tw 00:01:56.410 + [[ Fedora Linux == FreeBSD ]] 00:01:56.410 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:56.410 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:56.410 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:56.410 + [[ -x /usr/src/fio-static/fio ]] 00:01:56.410 + export FIO_BIN=/usr/src/fio-static/fio 00:01:56.410 + FIO_BIN=/usr/src/fio-static/fio 00:01:56.410 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:56.410 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:56.410 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:56.410 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:56.410 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:56.410 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:56.410 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:56.410 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:56.410 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:56.671 17:32:19 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:56.671 17:32:19 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.671 17:32:19 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:56.671 17:32:19 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:56.671 17:32:19 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:56.671 17:32:20 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:56.671 17:32:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:56.671 17:32:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:56.671 17:32:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:56.671 17:32:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.671 17:32:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.671 17:32:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.671 17:32:20 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.671 17:32:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.671 17:32:20 -- paths/export.sh@5 -- $ export PATH 00:01:56.671 17:32:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.671 17:32:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:56.671 17:32:20 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:56.671 17:32:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732123940.XXXXXX 00:01:56.671 17:32:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732123940.9GnfoV 00:01:56.671 17:32:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:56.671 17:32:20 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:56.671 17:32:20 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:56.671 17:32:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:56.671 17:32:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:56.671 17:32:20 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:56.671 17:32:20 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:56.671 17:32:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.671 17:32:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:56.671 17:32:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:56.671 17:32:20 -- pm/common@17 -- $ local monitor 00:01:56.671 17:32:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.671 17:32:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.671 17:32:20 -- pm/common@25 -- $ sleep 1 00:01:56.671 17:32:20 -- pm/common@21 -- $ date +%s 00:01:56.671 17:32:20 -- pm/common@21 -- $ date +%s 00:01:56.671 17:32:20 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732123940 00:01:56.671 17:32:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732123940 00:01:56.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732123940_collect-cpu-load.pm.log 00:01:56.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732123940_collect-vmstat.pm.log 00:01:57.614 17:32:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:57.614 17:32:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:57.614 17:32:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:57.614 17:32:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:57.614 17:32:21 -- spdk/autobuild.sh@16 -- $ date -u 00:01:57.614 Wed Nov 20 05:32:21 PM UTC 2024 00:01:57.614 17:32:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:57.614 v25.01-pre-225-g5c8d99223 00:01:57.614 17:32:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:57.614 17:32:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:57.614 17:32:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:57.614 17:32:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:57.614 17:32:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.614 ************************************ 00:01:57.614 START TEST asan 00:01:57.614 ************************************ 00:01:57.614 using asan 00:01:57.614 17:32:21 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:57.614 00:01:57.614 real 0m0.000s 00:01:57.614 user 0m0.000s 00:01:57.614 sys 0m0.000s 00:01:57.614 17:32:21 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:57.614 ************************************ 00:01:57.614 END TEST asan 00:01:57.614 17:32:21 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:57.614 ************************************ 00:01:57.614 17:32:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:57.614 17:32:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:57.614 17:32:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:57.614 17:32:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:57.614 17:32:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.614 ************************************ 00:01:57.614 START TEST ubsan 00:01:57.614 ************************************ 00:01:57.614 using ubsan 00:01:57.614 17:32:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:57.614 00:01:57.614 real 0m0.000s 00:01:57.614 user 0m0.000s 00:01:57.614 sys 0m0.000s 00:01:57.614 17:32:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:57.614 ************************************ 00:01:57.614 END TEST ubsan 00:01:57.614 ************************************ 00:01:57.614 17:32:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:57.874 17:32:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:57.874 17:32:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:57.874 17:32:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:57.874 17:32:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:57.874 17:32:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:57.874 17:32:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:57.874 17:32:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
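
The START TEST / END TEST banners above are printed by the harness's run_test wrapper (the real helper lives in SPDK's autotest_common.sh and also handles timing and xtrace state). A minimal sketch of its shape, inferred from the traced output rather than copied from the source:

    # Sketch inferred from the log banners; the real run_test does more bookkeeping.
    run_test() {
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        "$@"    # the test command itself
        echo "************ END TEST $test_name ************"
    }
    run_test ubsan echo 'using ubsan'
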
00:01:57.874 17:32:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:57.874 17:32:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:57.874 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:57.874 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:58.134 Using 'verbs' RDMA provider 00:02:09.077 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:19.125 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:19.697 Creating mk/config.mk...done. 00:02:19.697 Creating mk/cc.flags.mk...done. 00:02:19.697 Type 'make' to build. 00:02:19.697 17:32:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:19.697 17:32:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:19.697 17:32:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:19.697 17:32:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.697 ************************************ 00:02:19.697 START TEST make 00:02:19.697 ************************************ 00:02:19.697 17:32:43 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:19.697 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:19.697 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:19.697 meson setup builddir \ 00:02:19.697 -Dwith-libaio=enabled \ 00:02:19.697 -Dwith-liburing=enabled \ 00:02:19.697 -Dwith-libvfn=disabled \ 00:02:19.697 -Dwith-spdk=disabled \ 00:02:19.697 -Dexamples=false \ 00:02:19.697 -Dtests=false \ 00:02:19.697 -Dtools=false && \ 00:02:19.697 meson compile -C builddir && \ 00:02:19.697 cd -) 00:02:19.958 make[1]: Nothing to be done for 'all'. 
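
The configure invocation and flag set traced above can be reproduced by hand outside the CI VM. A minimal sketch, assuming an SPDK checkout with submodules initialized (the flag subset and job count are taken directly from the config_params and make -j10 lines in this log):

    # Sketch: same flags as this run's config_params, trimmed to the sanitizer/xnvme set.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --enable-asan --enable-ubsan \
                --with-xnvme --with-ublk --with-shared
    make -j10    # the CI VM was provisioned with 10 vCPUs (SPDK_VAGRANT_VMCPU=10)

With --with-xnvme set, the build first configures the bundled xnvme subproject via meson, which is the setup/compile step shown next.
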
00:02:21.896 The Meson build system 00:02:21.896 Version: 1.5.0 00:02:21.896 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:21.896 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:21.896 Build type: native build 00:02:21.896 Project name: xnvme 00:02:21.896 Project version: 0.7.5 00:02:21.896 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.896 C linker for the host machine: cc ld.bfd 2.40-14 00:02:21.896 Host machine cpu family: x86_64 00:02:21.896 Host machine cpu: x86_64 00:02:21.896 Message: host_machine.system: linux 00:02:21.896 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:21.896 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:21.896 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:21.896 Run-time dependency threads found: YES 00:02:21.896 Has header "setupapi.h" : NO 00:02:21.896 Has header "linux/blkzoned.h" : YES 00:02:21.896 Has header "linux/blkzoned.h" : YES (cached) 00:02:21.896 Has header "libaio.h" : YES 00:02:21.896 Library aio found: YES 00:02:21.896 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.896 Run-time dependency liburing found: YES 2.2 00:02:21.896 Dependency libvfn skipped: feature with-libvfn disabled 00:02:21.896 Found CMake: /usr/bin/cmake (3.27.7) 00:02:21.896 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:21.896 Subproject spdk : skipped: feature with-spdk disabled 00:02:21.896 Run-time dependency appleframeworks found: NO (tried framework) 00:02:21.896 Run-time dependency appleframeworks found: NO (tried framework) 00:02:21.896 Library rt found: YES 00:02:21.896 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:21.896 Configuring xnvme_config.h using configuration 00:02:21.896 Configuring xnvme.spec using configuration 00:02:21.896 Run-time dependency bash-completion found: YES 2.11 00:02:21.896 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:21.896 Program cp found: YES (/usr/bin/cp) 00:02:21.896 Build targets in project: 3 00:02:21.896 00:02:21.896 xnvme 0.7.5 00:02:21.896 00:02:21.896 Subprojects 00:02:21.896 spdk : NO Feature 'with-spdk' disabled 00:02:21.896 00:02:21.896 User defined options 00:02:21.896 examples : false 00:02:21.896 tests : false 00:02:21.896 tools : false 00:02:21.896 with-libaio : enabled 00:02:21.896 with-liburing: enabled 00:02:21.896 with-libvfn : disabled 00:02:21.896 with-spdk : disabled 00:02:21.896 00:02:21.896 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.158 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:22.158 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:22.158 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:22.158 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:22.158 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:22.158 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:22.158 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:22.419 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:22.419 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:22.419 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:22.419 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:22.419 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:22.419 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:22.419 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:22.419 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:22.419 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:22.419 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:22.419 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:22.419 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:22.419 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:22.419 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:22.419 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:22.419 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:22.419 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:22.419 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:22.419 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:22.419 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:22.419 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:22.419 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:22.419 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:22.419 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:22.419 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:22.679 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:22.679 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:22.679 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:22.679 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:22.679 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:22.679 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:22.679 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:22.679 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:22.679 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:22.679 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:22.679 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:22.679 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:22.679 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:22.679 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:22.679 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:22.679 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:22.679 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:22.679 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:22.679 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:22.679 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:22.679 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:22.679 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:22.679 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:22.679 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:22.679 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:22.679 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:22.679 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:22.679 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:22.679 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:22.940 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:22.940 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:22.940 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:22.940 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:22.940 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:22.940 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:22.940 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:22.941 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:22.941 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:22.941 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:22.941 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:22.941 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:23.231 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:23.231 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:23.508 [75/76] Linking static target lib/libxnvme.a 00:02:23.508 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:23.508 INFO: autodetecting backend as ninja 00:02:23.508 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:23.508 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:30.096 The Meson build system 00:02:30.096 Version: 1.5.0 00:02:30.096 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:30.096 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:30.096 Build type: native build 00:02:30.096 Program cat found: YES (/usr/bin/cat) 00:02:30.096 Project name: DPDK 00:02:30.096 Project version: 24.03.0 00:02:30.096 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:30.096 C linker for the host machine: cc ld.bfd 2.40-14 00:02:30.096 Host machine cpu family: x86_64 00:02:30.096 Host machine cpu: x86_64 00:02:30.096 Message: ## Building in Developer Mode ## 00:02:30.096 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:30.096 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:30.096 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:30.096 Program python3 found: YES (/usr/bin/python3) 00:02:30.096 Program cat found: YES (/usr/bin/cat) 00:02:30.096 Compiler for C supports arguments -march=native: YES 00:02:30.096 Checking for size of "void *" : 8 00:02:30.096 Checking for size of "void *" : 8 (cached) 00:02:30.096 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:30.096 Library m found: YES 00:02:30.096 Library numa found: YES 00:02:30.096 Has header "numaif.h" : YES 00:02:30.096 Library fdt found: NO 00:02:30.096 Library execinfo found: NO 00:02:30.096 Has header "execinfo.h" : YES 00:02:30.096 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:30.096 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:30.096 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:30.096 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:30.096 Run-time dependency openssl found: YES 3.1.1 00:02:30.096 Run-time dependency libpcap found: YES 1.10.4 00:02:30.096 Has header "pcap.h" with dependency libpcap: YES 00:02:30.096 Compiler for C supports arguments -Wcast-qual: YES 00:02:30.096 Compiler for C supports arguments -Wdeprecated: YES 00:02:30.096 Compiler for C supports arguments -Wformat: YES 00:02:30.096 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:30.096 Compiler for C supports arguments -Wformat-security: NO 00:02:30.096 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:30.096 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:30.096 Compiler for C supports arguments -Wnested-externs: YES 00:02:30.096 Compiler for C supports arguments -Wold-style-definition: YES 00:02:30.096 Compiler for C supports arguments -Wpointer-arith: YES 00:02:30.096 Compiler for C supports arguments -Wsign-compare: YES 00:02:30.096 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:30.096 Compiler for C supports arguments -Wundef: YES 00:02:30.096 Compiler for C supports arguments -Wwrite-strings: YES 00:02:30.096 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:30.096 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:30.096 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:30.096 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:30.096 Program objdump found: YES (/usr/bin/objdump) 00:02:30.096 Compiler for C supports arguments -mavx512f: YES 00:02:30.096 Checking if "AVX512 checking" compiles: YES 00:02:30.096 Fetching value of define "__SSE4_2__" : 1 00:02:30.096 Fetching value of define "__AES__" : 1 00:02:30.096 Fetching value of define "__AVX__" : 1 00:02:30.096 Fetching value of define "__AVX2__" : 1 00:02:30.096 Fetching value of define "__AVX512BW__" : 1 00:02:30.096 Fetching value of define "__AVX512CD__" : 1 00:02:30.096 Fetching value of define "__AVX512DQ__" : 1 00:02:30.096 Fetching value of define "__AVX512F__" : 1 00:02:30.096 Fetching value of define "__AVX512VL__" : 1 00:02:30.096 Fetching value of define "__PCLMUL__" : 1 00:02:30.096 Fetching value of define "__RDRND__" : 1 00:02:30.096 Fetching value of define "__RDSEED__" : 1 00:02:30.096 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:30.096 Fetching value of define "__znver1__" : (undefined) 00:02:30.096 Fetching value of define "__znver2__" : (undefined) 00:02:30.096 Fetching value of define "__znver3__" : (undefined) 00:02:30.096 Fetching value of define "__znver4__" : (undefined) 00:02:30.096 Library asan found: YES 00:02:30.096 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:30.096 Message: lib/log: Defining dependency "log" 00:02:30.096 Message: lib/kvargs: Defining dependency "kvargs" 00:02:30.096 Message: lib/telemetry: Defining dependency "telemetry" 00:02:30.096 Library rt found: YES 00:02:30.096 Checking for function "getentropy" : NO 00:02:30.096 Message: 
lib/eal: Defining dependency "eal" 00:02:30.096 Message: lib/ring: Defining dependency "ring" 00:02:30.096 Message: lib/rcu: Defining dependency "rcu" 00:02:30.096 Message: lib/mempool: Defining dependency "mempool" 00:02:30.096 Message: lib/mbuf: Defining dependency "mbuf" 00:02:30.096 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:30.096 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:30.096 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:30.096 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:30.096 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:30.096 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:30.096 Compiler for C supports arguments -mpclmul: YES 00:02:30.096 Compiler for C supports arguments -maes: YES 00:02:30.096 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:30.096 Compiler for C supports arguments -mavx512bw: YES 00:02:30.096 Compiler for C supports arguments -mavx512dq: YES 00:02:30.096 Compiler for C supports arguments -mavx512vl: YES 00:02:30.096 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:30.096 Compiler for C supports arguments -mavx2: YES 00:02:30.096 Compiler for C supports arguments -mavx: YES 00:02:30.096 Message: lib/net: Defining dependency "net" 00:02:30.096 Message: lib/meter: Defining dependency "meter" 00:02:30.096 Message: lib/ethdev: Defining dependency "ethdev" 00:02:30.096 Message: lib/pci: Defining dependency "pci" 00:02:30.096 Message: lib/cmdline: Defining dependency "cmdline" 00:02:30.096 Message: lib/hash: Defining dependency "hash" 00:02:30.096 Message: lib/timer: Defining dependency "timer" 00:02:30.096 Message: lib/compressdev: Defining dependency "compressdev" 00:02:30.096 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:30.096 Message: lib/dmadev: Defining dependency "dmadev" 00:02:30.096 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:30.096 Message: lib/power: Defining dependency "power" 00:02:30.096 Message: lib/reorder: Defining dependency "reorder" 00:02:30.096 Message: lib/security: Defining dependency "security" 00:02:30.096 Has header "linux/userfaultfd.h" : YES 00:02:30.096 Has header "linux/vduse.h" : YES 00:02:30.096 Message: lib/vhost: Defining dependency "vhost" 00:02:30.096 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:30.096 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:30.096 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:30.096 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:30.096 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:30.096 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:30.096 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:30.096 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:30.096 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:30.096 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:30.096 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:30.096 Configuring doxy-api-html.conf using configuration 00:02:30.096 Configuring doxy-api-man.conf using configuration 00:02:30.096 Program mandb found: YES (/usr/bin/mandb) 00:02:30.096 Program sphinx-build found: NO 00:02:30.096 Configuring rte_build_config.h using configuration 00:02:30.096 Message: 00:02:30.096 ================= 00:02:30.096 Applications Enabled 00:02:30.096 
================= 00:02:30.096 00:02:30.096 apps: 00:02:30.096 00:02:30.096 00:02:30.096 Message: 00:02:30.096 ================= 00:02:30.096 Libraries Enabled 00:02:30.096 ================= 00:02:30.096 00:02:30.096 libs: 00:02:30.096 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:30.096 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:30.096 cryptodev, dmadev, power, reorder, security, vhost, 00:02:30.096 00:02:30.096 Message: 00:02:30.096 =============== 00:02:30.096 Drivers Enabled 00:02:30.096 =============== 00:02:30.096 00:02:30.096 common: 00:02:30.096 00:02:30.096 bus: 00:02:30.096 pci, vdev, 00:02:30.096 mempool: 00:02:30.096 ring, 00:02:30.096 dma: 00:02:30.096 00:02:30.096 net: 00:02:30.096 00:02:30.096 crypto: 00:02:30.096 00:02:30.096 compress: 00:02:30.096 00:02:30.096 vdpa: 00:02:30.096 00:02:30.096 00:02:30.096 Message: 00:02:30.096 ================= 00:02:30.097 Content Skipped 00:02:30.097 ================= 00:02:30.097 00:02:30.097 apps: 00:02:30.097 dumpcap: explicitly disabled via build config 00:02:30.097 graph: explicitly disabled via build config 00:02:30.097 pdump: explicitly disabled via build config 00:02:30.097 proc-info: explicitly disabled via build config 00:02:30.097 test-acl: explicitly disabled via build config 00:02:30.097 test-bbdev: explicitly disabled via build config 00:02:30.097 test-cmdline: explicitly disabled via build config 00:02:30.097 test-compress-perf: explicitly disabled via build config 00:02:30.097 test-crypto-perf: explicitly disabled via build config 00:02:30.097 test-dma-perf: explicitly disabled via build config 00:02:30.097 test-eventdev: explicitly disabled via build config 00:02:30.097 test-fib: explicitly disabled via build config 00:02:30.097 test-flow-perf: explicitly disabled via build config 00:02:30.097 test-gpudev: explicitly disabled via build config 00:02:30.097 test-mldev: explicitly disabled via build config 00:02:30.097 test-pipeline: explicitly disabled via build config 00:02:30.097 test-pmd: explicitly disabled via build config 00:02:30.097 test-regex: explicitly disabled via build config 00:02:30.097 test-sad: explicitly disabled via build config 00:02:30.097 test-security-perf: explicitly disabled via build config 00:02:30.097 00:02:30.097 libs: 00:02:30.097 argparse: explicitly disabled via build config 00:02:30.097 metrics: explicitly disabled via build config 00:02:30.097 acl: explicitly disabled via build config 00:02:30.097 bbdev: explicitly disabled via build config 00:02:30.097 bitratestats: explicitly disabled via build config 00:02:30.097 bpf: explicitly disabled via build config 00:02:30.097 cfgfile: explicitly disabled via build config 00:02:30.097 distributor: explicitly disabled via build config 00:02:30.097 efd: explicitly disabled via build config 00:02:30.097 eventdev: explicitly disabled via build config 00:02:30.097 dispatcher: explicitly disabled via build config 00:02:30.097 gpudev: explicitly disabled via build config 00:02:30.097 gro: explicitly disabled via build config 00:02:30.097 gso: explicitly disabled via build config 00:02:30.097 ip_frag: explicitly disabled via build config 00:02:30.097 jobstats: explicitly disabled via build config 00:02:30.097 latencystats: explicitly disabled via build config 00:02:30.097 lpm: explicitly disabled via build config 00:02:30.097 member: explicitly disabled via build config 00:02:30.097 pcapng: explicitly disabled via build config 00:02:30.097 rawdev: explicitly disabled via build config 00:02:30.097 regexdev: explicitly 
disabled via build config 00:02:30.097 mldev: explicitly disabled via build config 00:02:30.097 rib: explicitly disabled via build config 00:02:30.097 sched: explicitly disabled via build config 00:02:30.097 stack: explicitly disabled via build config 00:02:30.097 ipsec: explicitly disabled via build config 00:02:30.097 pdcp: explicitly disabled via build config 00:02:30.097 fib: explicitly disabled via build config 00:02:30.097 port: explicitly disabled via build config 00:02:30.097 pdump: explicitly disabled via build config 00:02:30.097 table: explicitly disabled via build config 00:02:30.097 pipeline: explicitly disabled via build config 00:02:30.097 graph: explicitly disabled via build config 00:02:30.097 node: explicitly disabled via build config 00:02:30.097 00:02:30.097 drivers: 00:02:30.097 common/cpt: not in enabled drivers build config 00:02:30.097 common/dpaax: not in enabled drivers build config 00:02:30.097 common/iavf: not in enabled drivers build config 00:02:30.097 common/idpf: not in enabled drivers build config 00:02:30.097 common/ionic: not in enabled drivers build config 00:02:30.097 common/mvep: not in enabled drivers build config 00:02:30.097 common/octeontx: not in enabled drivers build config 00:02:30.097 bus/auxiliary: not in enabled drivers build config 00:02:30.097 bus/cdx: not in enabled drivers build config 00:02:30.097 bus/dpaa: not in enabled drivers build config 00:02:30.097 bus/fslmc: not in enabled drivers build config 00:02:30.097 bus/ifpga: not in enabled drivers build config 00:02:30.097 bus/platform: not in enabled drivers build config 00:02:30.097 bus/uacce: not in enabled drivers build config 00:02:30.097 bus/vmbus: not in enabled drivers build config 00:02:30.097 common/cnxk: not in enabled drivers build config 00:02:30.097 common/mlx5: not in enabled drivers build config 00:02:30.097 common/nfp: not in enabled drivers build config 00:02:30.097 common/nitrox: not in enabled drivers build config 00:02:30.097 common/qat: not in enabled drivers build config 00:02:30.097 common/sfc_efx: not in enabled drivers build config 00:02:30.097 mempool/bucket: not in enabled drivers build config 00:02:30.097 mempool/cnxk: not in enabled drivers build config 00:02:30.097 mempool/dpaa: not in enabled drivers build config 00:02:30.097 mempool/dpaa2: not in enabled drivers build config 00:02:30.097 mempool/octeontx: not in enabled drivers build config 00:02:30.097 mempool/stack: not in enabled drivers build config 00:02:30.097 dma/cnxk: not in enabled drivers build config 00:02:30.097 dma/dpaa: not in enabled drivers build config 00:02:30.097 dma/dpaa2: not in enabled drivers build config 00:02:30.097 dma/hisilicon: not in enabled drivers build config 00:02:30.097 dma/idxd: not in enabled drivers build config 00:02:30.097 dma/ioat: not in enabled drivers build config 00:02:30.097 dma/skeleton: not in enabled drivers build config 00:02:30.097 net/af_packet: not in enabled drivers build config 00:02:30.097 net/af_xdp: not in enabled drivers build config 00:02:30.097 net/ark: not in enabled drivers build config 00:02:30.097 net/atlantic: not in enabled drivers build config 00:02:30.097 net/avp: not in enabled drivers build config 00:02:30.097 net/axgbe: not in enabled drivers build config 00:02:30.097 net/bnx2x: not in enabled drivers build config 00:02:30.097 net/bnxt: not in enabled drivers build config 00:02:30.097 net/bonding: not in enabled drivers build config 00:02:30.097 net/cnxk: not in enabled drivers build config 00:02:30.097 net/cpfl: not in enabled drivers 
build config 00:02:30.097 net/cxgbe: not in enabled drivers build config 00:02:30.097 net/dpaa: not in enabled drivers build config 00:02:30.097 net/dpaa2: not in enabled drivers build config 00:02:30.097 net/e1000: not in enabled drivers build config 00:02:30.097 net/ena: not in enabled drivers build config 00:02:30.097 net/enetc: not in enabled drivers build config 00:02:30.097 net/enetfec: not in enabled drivers build config 00:02:30.097 net/enic: not in enabled drivers build config 00:02:30.097 net/failsafe: not in enabled drivers build config 00:02:30.097 net/fm10k: not in enabled drivers build config 00:02:30.097 net/gve: not in enabled drivers build config 00:02:30.097 net/hinic: not in enabled drivers build config 00:02:30.097 net/hns3: not in enabled drivers build config 00:02:30.097 net/i40e: not in enabled drivers build config 00:02:30.097 net/iavf: not in enabled drivers build config 00:02:30.097 net/ice: not in enabled drivers build config 00:02:30.097 net/idpf: not in enabled drivers build config 00:02:30.097 net/igc: not in enabled drivers build config 00:02:30.097 net/ionic: not in enabled drivers build config 00:02:30.097 net/ipn3ke: not in enabled drivers build config 00:02:30.097 net/ixgbe: not in enabled drivers build config 00:02:30.097 net/mana: not in enabled drivers build config 00:02:30.097 net/memif: not in enabled drivers build config 00:02:30.097 net/mlx4: not in enabled drivers build config 00:02:30.097 net/mlx5: not in enabled drivers build config 00:02:30.097 net/mvneta: not in enabled drivers build config 00:02:30.097 net/mvpp2: not in enabled drivers build config 00:02:30.097 net/netvsc: not in enabled drivers build config 00:02:30.097 net/nfb: not in enabled drivers build config 00:02:30.097 net/nfp: not in enabled drivers build config 00:02:30.097 net/ngbe: not in enabled drivers build config 00:02:30.097 net/null: not in enabled drivers build config 00:02:30.097 net/octeontx: not in enabled drivers build config 00:02:30.097 net/octeon_ep: not in enabled drivers build config 00:02:30.097 net/pcap: not in enabled drivers build config 00:02:30.097 net/pfe: not in enabled drivers build config 00:02:30.097 net/qede: not in enabled drivers build config 00:02:30.097 net/ring: not in enabled drivers build config 00:02:30.097 net/sfc: not in enabled drivers build config 00:02:30.097 net/softnic: not in enabled drivers build config 00:02:30.097 net/tap: not in enabled drivers build config 00:02:30.097 net/thunderx: not in enabled drivers build config 00:02:30.097 net/txgbe: not in enabled drivers build config 00:02:30.097 net/vdev_netvsc: not in enabled drivers build config 00:02:30.097 net/vhost: not in enabled drivers build config 00:02:30.097 net/virtio: not in enabled drivers build config 00:02:30.097 net/vmxnet3: not in enabled drivers build config 00:02:30.097 raw/*: missing internal dependency, "rawdev" 00:02:30.097 crypto/armv8: not in enabled drivers build config 00:02:30.097 crypto/bcmfs: not in enabled drivers build config 00:02:30.097 crypto/caam_jr: not in enabled drivers build config 00:02:30.097 crypto/ccp: not in enabled drivers build config 00:02:30.097 crypto/cnxk: not in enabled drivers build config 00:02:30.097 crypto/dpaa_sec: not in enabled drivers build config 00:02:30.097 crypto/dpaa2_sec: not in enabled drivers build config 00:02:30.097 crypto/ipsec_mb: not in enabled drivers build config 00:02:30.097 crypto/mlx5: not in enabled drivers build config 00:02:30.097 crypto/mvsam: not in enabled drivers build config 00:02:30.097 crypto/nitrox: 
not in enabled drivers build config 00:02:30.097 crypto/null: not in enabled drivers build config 00:02:30.097 crypto/octeontx: not in enabled drivers build config 00:02:30.097 crypto/openssl: not in enabled drivers build config 00:02:30.097 crypto/scheduler: not in enabled drivers build config 00:02:30.097 crypto/uadk: not in enabled drivers build config 00:02:30.097 crypto/virtio: not in enabled drivers build config 00:02:30.097 compress/isal: not in enabled drivers build config 00:02:30.097 compress/mlx5: not in enabled drivers build config 00:02:30.097 compress/nitrox: not in enabled drivers build config 00:02:30.097 compress/octeontx: not in enabled drivers build config 00:02:30.097 compress/zlib: not in enabled drivers build config 00:02:30.097 regex/*: missing internal dependency, "regexdev" 00:02:30.097 ml/*: missing internal dependency, "mldev" 00:02:30.097 vdpa/ifc: not in enabled drivers build config 00:02:30.097 vdpa/mlx5: not in enabled drivers build config 00:02:30.097 vdpa/nfp: not in enabled drivers build config 00:02:30.098 vdpa/sfc: not in enabled drivers build config 00:02:30.098 event/*: missing internal dependency, "eventdev" 00:02:30.098 baseband/*: missing internal dependency, "bbdev" 00:02:30.098 gpu/*: missing internal dependency, "gpudev" 00:02:30.098 00:02:30.098 00:02:30.098 Build targets in project: 84 00:02:30.098 00:02:30.098 DPDK 24.03.0 00:02:30.098 00:02:30.098 User defined options 00:02:30.098 buildtype : debug 00:02:30.098 default_library : shared 00:02:30.098 libdir : lib 00:02:30.098 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:30.098 b_sanitize : address 00:02:30.098 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:30.098 c_link_args : 00:02:30.098 cpu_instruction_set: native 00:02:30.098 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:30.098 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:30.098 enable_docs : false 00:02:30.098 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:30.098 enable_kmods : false 00:02:30.098 max_lcores : 128 00:02:30.098 tests : false 00:02:30.098 00:02:30.098 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:30.098 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:30.098 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:30.098 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:30.098 [3/267] Linking static target lib/librte_kvargs.a 00:02:30.098 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:30.098 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:30.098 [6/267] Linking static target lib/librte_log.a 00:02:30.358 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.358 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:30.358 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:30.358 [10/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:30.358 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:30.358 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:30.358 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.358 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.358 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:30.618 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.618 [17/267] Linking static target lib/librte_telemetry.a 00:02:30.618 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.879 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.879 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.879 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.879 [22/267] Linking target lib/librte_log.so.24.1 00:02:30.879 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.879 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.879 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.140 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:31.140 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:31.140 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:31.140 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:31.140 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:31.140 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.140 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:31.140 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.400 [34/267] Linking target lib/librte_telemetry.so.24.1 00:02:31.400 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:31.400 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.400 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.400 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.400 [39/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.400 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:31.400 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.400 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.400 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.400 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.400 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:31.660 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.660 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:31.660 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
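The "User defined options" summary printed by meson above corresponds, roughly, to a setup invocation like the sketch below. This is a hypothetical reconstruction for reference only; the actual command is issued by SPDK's build scripts and is not echoed in this log, and APPS, LIBS and DRIVERS stand for the full disable_apps, disable_libs and enable_drivers lists printed in the summary.

  # Hypothetical reconstruction of the DPDK configure/build step implied by
  # the "User defined options" summary above; not the literal CI command.
  meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps="$APPS" -Ddisable_libs="$LIBS" -Denable_drivers="$DRIVERS" \
      -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10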
00:02:31.919 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:31.919 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:31.919 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:31.919 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:31.919 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:31.919 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:31.919 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:31.919 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:32.177 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:32.177 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:32.177 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:32.177 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:32.177 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:32.177 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:32.177 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:32.177 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:32.435 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:32.435 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:32.435 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:32.435 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:32.693 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:32.693 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:32.693 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:32.693 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:32.693 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:32.693 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.693 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.693 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:32.693 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:32.693 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.951 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.951 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:32.951 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.951 [82/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:32.951 [83/267] Linking static target lib/librte_ring.a 00:02:32.951 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:33.209 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:33.209 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:33.209 [87/267] Linking static target lib/librte_eal.a 00:02:33.209 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:33.467 [89/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:33.467 [90/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:33.467 [91/267] Linking static target lib/librte_rcu.a 00:02:33.467 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:33.467 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.467 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:33.467 [95/267] Linking static target lib/librte_mempool.a 00:02:33.467 [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.467 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:33.725 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:33.725 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.725 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:33.725 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.725 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.983 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:33.983 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:34.241 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.241 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:34.241 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:34.241 [108/267] Linking static target lib/librte_meter.a 00:02:34.241 [109/267] Linking static target lib/librte_net.a 00:02:34.241 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:34.499 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:34.499 [112/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.499 [113/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.499 [114/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.499 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:34.499 [116/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:34.499 [117/267] Linking static target lib/librte_mbuf.a 00:02:34.758 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:34.758 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:34.758 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.015 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.015 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.015 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.273 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.273 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:35.273 [126/267] Linking static target lib/librte_pci.a 00:02:35.273 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:35.273 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.531 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:35.531 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.531 [131/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.531 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:35.531 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:35.531 [134/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.531 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:35.531 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:35.531 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:35.531 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.531 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:35.531 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:35.531 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:35.531 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.531 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:35.531 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:35.789 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.789 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.789 [147/267] Linking static target lib/librte_cmdline.a 00:02:36.047 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:36.047 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:36.047 [150/267] Linking static target lib/librte_timer.a 00:02:36.047 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.047 [152/267] Linking static target lib/librte_ethdev.a 00:02:36.047 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:36.305 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:36.305 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:36.305 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:36.563 [157/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.563 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:36.563 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:36.563 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.563 [161/267] Linking static target lib/librte_compressdev.a 00:02:36.563 [162/267] Linking static target lib/librte_hash.a 00:02:36.563 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:36.563 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.821 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:36.821 [166/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:36.821 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:36.821 [168/267] Linking static target lib/librte_dmadev.a 00:02:36.821 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:36.821 [170/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:37.078 [171/267] 
Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:37.078 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.339 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.339 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:37.339 [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:37.339 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:37.339 [177/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.339 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:37.598 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:37.598 [180/267] Linking static target lib/librte_cryptodev.a 00:02:37.598 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:37.598 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:37.598 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.598 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:37.598 [185/267] Linking static target lib/librte_power.a 00:02:37.856 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:37.856 [187/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:37.856 [188/267] Linking static target lib/librte_security.a 00:02:37.856 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:38.115 [190/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.115 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:38.115 [192/267] Linking static target lib/librte_reorder.a 00:02:38.374 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:38.374 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.633 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.633 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.633 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:38.633 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:38.891 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:38.891 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.891 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:39.149 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:39.149 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.149 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:39.149 [205/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:39.149 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:39.408 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:39.408 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:39.408 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:39.408 [210/267] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:02:39.408 [211/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.666 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.666 [213/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.666 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.666 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:39.666 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.666 [217/267] Linking static target drivers/librte_bus_pci.a 00:02:39.666 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.667 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.667 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.925 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.925 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.925 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.925 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.925 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:39.925 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.860 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.427 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.427 [229/267] Linking target lib/librte_eal.so.24.1 00:02:41.427 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.427 [231/267] Linking target lib/librte_meter.so.24.1 00:02:41.427 [232/267] Linking target lib/librte_timer.so.24.1 00:02:41.427 [233/267] Linking target lib/librte_ring.so.24.1 00:02:41.428 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.428 [235/267] Linking target lib/librte_pci.so.24.1 00:02:41.686 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:41.686 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.686 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.686 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.686 [240/267] Linking target lib/librte_mempool.so.24.1 00:02:41.686 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.686 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.686 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:41.686 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.686 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.686 [246/267] Linking target lib/librte_mbuf.so.24.1 00:02:41.686 [247/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.686 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.944 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
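The "Generating drivers/rte_bus_vdev.pmd.c with a custom command" steps above emit small generated sources carrying per-driver metadata, which are then compiled into both the static (.a) and shared (.so) variants of each driver, while the "*.sym_chk" targets check each library's exported symbols. After a "Linking target" step completes, the versioned shared object can be inspected by hand; the path below is inferred from the link lines above and is an assumption, not taken from this log.

  # Spot-check the dynamic symbol table of the freshly linked vdev bus driver
  # (path inferred from the "Linking target" lines above; may differ).
  nm -D --defined-only \
      /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/drivers/librte_bus_vdev.so.24.1 | head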
00:02:41.944 [250/267] Linking target lib/librte_net.so.24.1 00:02:41.944 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:41.944 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:41.944 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:41.944 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:41.944 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.203 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:42.203 [257/267] Linking target lib/librte_hash.so.24.1 00:02:42.203 [258/267] Linking target lib/librte_security.so.24.1 00:02:42.203 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.203 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.203 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:42.462 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.462 [263/267] Linking target lib/librte_power.so.24.1 00:02:43.398 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.398 [265/267] Linking static target lib/librte_vhost.a 00:02:44.334 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.593 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:44.593 INFO: autodetecting backend as ninja 00:02:44.593 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:02.743 CC lib/ut_mock/mock.o 00:03:02.743 CC lib/log/log_flags.o 00:03:02.743 CC lib/log/log_deprecated.o 00:03:02.743 CC lib/log/log.o 00:03:02.743 CC lib/ut/ut.o 00:03:02.743 LIB libspdk_ut_mock.a 00:03:02.743 LIB libspdk_ut.a 00:03:02.743 LIB libspdk_log.a 00:03:02.743 SO libspdk_ut_mock.so.6.0 00:03:02.743 SO libspdk_ut.so.2.0 00:03:02.743 SO libspdk_log.so.7.1 00:03:02.743 SYMLINK libspdk_ut.so 00:03:02.743 SYMLINK libspdk_ut_mock.so 00:03:02.743 SYMLINK libspdk_log.so 00:03:02.743 CC lib/dma/dma.o 00:03:02.743 CC lib/util/base64.o 00:03:02.743 CC lib/util/bit_array.o 00:03:02.743 CC lib/util/cpuset.o 00:03:02.743 CC lib/ioat/ioat.o 00:03:02.743 CXX lib/trace_parser/trace.o 00:03:02.743 CC lib/util/crc16.o 00:03:02.743 CC lib/util/crc32.o 00:03:02.743 CC lib/util/crc32c.o 00:03:02.743 CC lib/util/crc32_ieee.o 00:03:02.743 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.743 CC lib/util/crc64.o 00:03:02.743 CC lib/util/dif.o 00:03:02.743 CC lib/util/fd.o 00:03:02.743 CC lib/util/fd_group.o 00:03:02.743 CC lib/util/file.o 00:03:02.743 LIB libspdk_dma.a 00:03:02.743 CC lib/util/hexlify.o 00:03:02.743 SO libspdk_dma.so.5.0 00:03:02.743 CC lib/util/iov.o 00:03:02.743 CC lib/util/math.o 00:03:02.743 LIB libspdk_ioat.a 00:03:02.743 SYMLINK libspdk_dma.so 00:03:02.743 CC lib/vfio_user/host/vfio_user.o 00:03:02.743 SO libspdk_ioat.so.7.0 00:03:02.743 CC lib/util/net.o 00:03:02.743 SYMLINK libspdk_ioat.so 00:03:02.743 CC lib/util/pipe.o 00:03:02.743 CC lib/util/strerror_tls.o 00:03:02.743 CC lib/util/string.o 00:03:02.743 CC lib/util/uuid.o 00:03:02.743 CC lib/util/xor.o 00:03:02.743 LIB libspdk_vfio_user.a 00:03:02.743 CC lib/util/zipf.o 00:03:02.743 SO libspdk_vfio_user.so.5.0 00:03:02.743 CC lib/util/md5.o 00:03:02.743 SYMLINK libspdk_vfio_user.so 00:03:02.743 LIB libspdk_util.a 00:03:02.743 SO libspdk_util.so.10.1 00:03:02.743 LIB libspdk_trace_parser.a 00:03:02.743 SO libspdk_trace_parser.so.6.0 00:03:02.743 
SYMLINK libspdk_util.so 00:03:02.743 SYMLINK libspdk_trace_parser.so 00:03:02.743 CC lib/conf/conf.o 00:03:02.743 CC lib/idxd/idxd.o 00:03:02.743 CC lib/idxd/idxd_user.o 00:03:02.743 CC lib/idxd/idxd_kernel.o 00:03:02.743 CC lib/vmd/vmd.o 00:03:02.743 CC lib/rdma_utils/rdma_utils.o 00:03:02.743 CC lib/json/json_parse.o 00:03:02.743 CC lib/vmd/led.o 00:03:02.743 CC lib/json/json_util.o 00:03:02.743 CC lib/env_dpdk/env.o 00:03:02.743 CC lib/env_dpdk/memory.o 00:03:02.743 CC lib/env_dpdk/pci.o 00:03:02.743 LIB libspdk_conf.a 00:03:02.743 SO libspdk_conf.so.6.0 00:03:02.743 CC lib/env_dpdk/init.o 00:03:02.743 CC lib/env_dpdk/threads.o 00:03:02.743 SYMLINK libspdk_conf.so 00:03:02.743 LIB libspdk_rdma_utils.a 00:03:02.743 CC lib/env_dpdk/pci_ioat.o 00:03:02.743 CC lib/json/json_write.o 00:03:02.743 SO libspdk_rdma_utils.so.1.0 00:03:02.743 SYMLINK libspdk_rdma_utils.so 00:03:02.743 CC lib/env_dpdk/pci_virtio.o 00:03:02.743 CC lib/env_dpdk/pci_vmd.o 00:03:02.743 CC lib/env_dpdk/pci_idxd.o 00:03:02.743 CC lib/env_dpdk/pci_event.o 00:03:02.743 CC lib/env_dpdk/sigbus_handler.o 00:03:02.743 CC lib/env_dpdk/pci_dpdk.o 00:03:02.743 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.743 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.743 LIB libspdk_idxd.a 00:03:02.743 LIB libspdk_json.a 00:03:02.743 SO libspdk_idxd.so.12.1 00:03:02.744 CC lib/rdma_provider/common.o 00:03:02.744 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:02.744 SO libspdk_json.so.6.0 00:03:02.744 SYMLINK libspdk_idxd.so 00:03:02.744 SYMLINK libspdk_json.so 00:03:02.744 LIB libspdk_vmd.a 00:03:02.744 SO libspdk_vmd.so.6.0 00:03:02.744 LIB libspdk_rdma_provider.a 00:03:02.744 SYMLINK libspdk_vmd.so 00:03:02.744 SO libspdk_rdma_provider.so.7.0 00:03:02.744 SYMLINK libspdk_rdma_provider.so 00:03:02.744 CC lib/jsonrpc/jsonrpc_server.o 00:03:02.744 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.744 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.744 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.744 LIB libspdk_jsonrpc.a 00:03:02.744 SO libspdk_jsonrpc.so.6.0 00:03:02.744 SYMLINK libspdk_jsonrpc.so 00:03:03.002 CC lib/rpc/rpc.o 00:03:03.002 LIB libspdk_env_dpdk.a 00:03:03.002 SO libspdk_env_dpdk.so.15.1 00:03:03.002 LIB libspdk_rpc.a 00:03:03.002 SO libspdk_rpc.so.6.0 00:03:03.002 SYMLINK libspdk_env_dpdk.so 00:03:03.259 SYMLINK libspdk_rpc.so 00:03:03.259 CC lib/notify/notify_rpc.o 00:03:03.259 CC lib/trace/trace.o 00:03:03.259 CC lib/notify/notify.o 00:03:03.259 CC lib/trace/trace_rpc.o 00:03:03.259 CC lib/trace/trace_flags.o 00:03:03.259 CC lib/keyring/keyring.o 00:03:03.259 CC lib/keyring/keyring_rpc.o 00:03:03.517 LIB libspdk_notify.a 00:03:03.517 SO libspdk_notify.so.6.0 00:03:03.517 LIB libspdk_trace.a 00:03:03.517 SYMLINK libspdk_notify.so 00:03:03.517 LIB libspdk_keyring.a 00:03:03.517 SO libspdk_trace.so.11.0 00:03:03.517 SO libspdk_keyring.so.2.0 00:03:03.517 SYMLINK libspdk_trace.so 00:03:03.517 SYMLINK libspdk_keyring.so 00:03:03.775 CC lib/sock/sock.o 00:03:03.775 CC lib/sock/sock_rpc.o 00:03:03.775 CC lib/thread/thread.o 00:03:03.775 CC lib/thread/iobuf.o 00:03:04.034 LIB libspdk_sock.a 00:03:04.034 SO libspdk_sock.so.10.0 00:03:04.291 SYMLINK libspdk_sock.so 00:03:04.549 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:04.549 CC lib/nvme/nvme_fabric.o 00:03:04.549 CC lib/nvme/nvme_ns_cmd.o 00:03:04.549 CC lib/nvme/nvme_ctrlr.o 00:03:04.549 CC lib/nvme/nvme_ns.o 00:03:04.549 CC lib/nvme/nvme_qpair.o 00:03:04.549 CC lib/nvme/nvme_pcie.o 00:03:04.549 CC lib/nvme/nvme.o 00:03:04.549 CC lib/nvme/nvme_pcie_common.o 00:03:04.807 CC lib/nvme/nvme_quirks.o 00:03:05.064 
CC lib/nvme/nvme_transport.o 00:03:05.065 LIB libspdk_thread.a 00:03:05.065 CC lib/nvme/nvme_discovery.o 00:03:05.065 SO libspdk_thread.so.11.0 00:03:05.065 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.065 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.065 SYMLINK libspdk_thread.so 00:03:05.323 CC lib/nvme/nvme_tcp.o 00:03:05.323 CC lib/nvme/nvme_opal.o 00:03:05.323 CC lib/accel/accel.o 00:03:05.323 CC lib/blob/blobstore.o 00:03:05.583 CC lib/nvme/nvme_io_msg.o 00:03:05.583 CC lib/init/json_config.o 00:03:05.842 CC lib/nvme/nvme_poll_group.o 00:03:05.842 CC lib/virtio/virtio.o 00:03:05.842 CC lib/fsdev/fsdev.o 00:03:05.842 CC lib/nvme/nvme_zns.o 00:03:05.842 CC lib/init/subsystem.o 00:03:06.101 CC lib/nvme/nvme_stubs.o 00:03:06.101 CC lib/init/subsystem_rpc.o 00:03:06.101 CC lib/virtio/virtio_vhost_user.o 00:03:06.101 CC lib/init/rpc.o 00:03:06.101 CC lib/accel/accel_rpc.o 00:03:06.382 CC lib/nvme/nvme_auth.o 00:03:06.382 CC lib/fsdev/fsdev_io.o 00:03:06.382 LIB libspdk_init.a 00:03:06.382 SO libspdk_init.so.6.0 00:03:06.382 CC lib/accel/accel_sw.o 00:03:06.382 SYMLINK libspdk_init.so 00:03:06.382 CC lib/blob/request.o 00:03:06.382 CC lib/virtio/virtio_vfio_user.o 00:03:06.382 CC lib/blob/zeroes.o 00:03:06.382 CC lib/nvme/nvme_cuse.o 00:03:06.640 CC lib/nvme/nvme_rdma.o 00:03:06.640 LIB libspdk_accel.a 00:03:06.640 CC lib/virtio/virtio_pci.o 00:03:06.640 SO libspdk_accel.so.16.0 00:03:06.640 CC lib/fsdev/fsdev_rpc.o 00:03:06.640 SYMLINK libspdk_accel.so 00:03:06.640 CC lib/blob/blob_bs_dev.o 00:03:06.640 LIB libspdk_fsdev.a 00:03:06.898 CC lib/event/app.o 00:03:06.898 SO libspdk_fsdev.so.2.0 00:03:06.898 CC lib/event/reactor.o 00:03:06.898 CC lib/bdev/bdev.o 00:03:06.898 LIB libspdk_virtio.a 00:03:06.898 SYMLINK libspdk_fsdev.so 00:03:06.898 CC lib/bdev/bdev_rpc.o 00:03:06.898 SO libspdk_virtio.so.7.0 00:03:06.898 SYMLINK libspdk_virtio.so 00:03:06.898 CC lib/event/log_rpc.o 00:03:07.157 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:07.157 CC lib/event/app_rpc.o 00:03:07.157 CC lib/bdev/bdev_zone.o 00:03:07.157 CC lib/event/scheduler_static.o 00:03:07.157 CC lib/bdev/part.o 00:03:07.157 CC lib/bdev/scsi_nvme.o 00:03:07.415 LIB libspdk_event.a 00:03:07.415 SO libspdk_event.so.14.0 00:03:07.415 SYMLINK libspdk_event.so 00:03:07.674 LIB libspdk_nvme.a 00:03:07.674 LIB libspdk_fuse_dispatcher.a 00:03:07.674 SO libspdk_nvme.so.15.0 00:03:07.674 SO libspdk_fuse_dispatcher.so.1.0 00:03:07.932 SYMLINK libspdk_fuse_dispatcher.so 00:03:07.932 SYMLINK libspdk_nvme.so 00:03:08.867 LIB libspdk_blob.a 00:03:08.867 SO libspdk_blob.so.11.0 00:03:08.867 SYMLINK libspdk_blob.so 00:03:09.125 CC lib/blobfs/tree.o 00:03:09.125 CC lib/blobfs/blobfs.o 00:03:09.125 CC lib/lvol/lvol.o 00:03:09.692 LIB libspdk_bdev.a 00:03:09.692 LIB libspdk_blobfs.a 00:03:09.692 SO libspdk_bdev.so.17.0 00:03:09.692 SO libspdk_blobfs.so.10.0 00:03:09.692 SYMLINK libspdk_blobfs.so 00:03:09.950 SYMLINK libspdk_bdev.so 00:03:09.950 CC lib/ftl/ftl_core.o 00:03:09.950 CC lib/ftl/ftl_init.o 00:03:09.950 CC lib/ftl/ftl_layout.o 00:03:09.950 CC lib/ftl/ftl_debug.o 00:03:09.950 CC lib/ublk/ublk.o 00:03:09.950 CC lib/ublk/ublk_rpc.o 00:03:09.950 CC lib/nbd/nbd.o 00:03:09.950 CC lib/nvmf/ctrlr.o 00:03:09.950 CC lib/scsi/dev.o 00:03:09.950 LIB libspdk_lvol.a 00:03:09.950 SO libspdk_lvol.so.10.0 00:03:10.208 SYMLINK libspdk_lvol.so 00:03:10.208 CC lib/scsi/lun.o 00:03:10.208 CC lib/scsi/port.o 00:03:10.208 CC lib/scsi/scsi.o 00:03:10.208 CC lib/scsi/scsi_bdev.o 00:03:10.208 CC lib/scsi/scsi_pr.o 00:03:10.208 CC lib/scsi/scsi_rpc.o 00:03:10.208 CC 
lib/scsi/task.o 00:03:10.468 CC lib/nvmf/ctrlr_discovery.o 00:03:10.468 CC lib/nvmf/ctrlr_bdev.o 00:03:10.468 CC lib/ftl/ftl_io.o 00:03:10.468 CC lib/nbd/nbd_rpc.o 00:03:10.468 CC lib/ftl/ftl_sb.o 00:03:10.468 CC lib/nvmf/subsystem.o 00:03:10.468 CC lib/ftl/ftl_l2p.o 00:03:10.468 LIB libspdk_nbd.a 00:03:10.468 SO libspdk_nbd.so.7.0 00:03:10.468 LIB libspdk_scsi.a 00:03:10.468 CC lib/nvmf/nvmf.o 00:03:10.468 SYMLINK libspdk_nbd.so 00:03:10.468 SO libspdk_scsi.so.9.0 00:03:10.468 CC lib/ftl/ftl_l2p_flat.o 00:03:10.468 CC lib/ftl/ftl_nv_cache.o 00:03:10.728 CC lib/ftl/ftl_band.o 00:03:10.728 LIB libspdk_ublk.a 00:03:10.728 SYMLINK libspdk_scsi.so 00:03:10.728 CC lib/nvmf/nvmf_rpc.o 00:03:10.728 SO libspdk_ublk.so.3.0 00:03:10.728 SYMLINK libspdk_ublk.so 00:03:10.728 CC lib/nvmf/transport.o 00:03:10.728 CC lib/nvmf/tcp.o 00:03:10.986 CC lib/iscsi/conn.o 00:03:10.986 CC lib/iscsi/init_grp.o 00:03:10.986 CC lib/nvmf/stubs.o 00:03:11.245 CC lib/nvmf/mdns_server.o 00:03:11.245 CC lib/ftl/ftl_band_ops.o 00:03:11.245 CC lib/ftl/ftl_writer.o 00:03:11.505 CC lib/ftl/ftl_rq.o 00:03:11.505 CC lib/ftl/ftl_reloc.o 00:03:11.505 CC lib/nvmf/rdma.o 00:03:11.505 CC lib/ftl/ftl_l2p_cache.o 00:03:11.505 CC lib/ftl/ftl_p2l.o 00:03:11.505 CC lib/iscsi/iscsi.o 00:03:11.505 CC lib/ftl/ftl_p2l_log.o 00:03:11.505 CC lib/nvmf/auth.o 00:03:11.505 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.505 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.765 CC lib/iscsi/param.o 00:03:11.765 CC lib/iscsi/portal_grp.o 00:03:11.765 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.765 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.025 CC lib/vhost/vhost.o 00:03:12.025 CC lib/vhost/vhost_rpc.o 00:03:12.025 CC lib/vhost/vhost_scsi.o 00:03:12.025 CC lib/vhost/vhost_blk.o 00:03:12.025 CC lib/iscsi/tgt_node.o 00:03:12.025 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.284 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.285 CC lib/vhost/rte_vhost_user.o 00:03:12.285 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.285 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.543 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.544 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.544 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.544 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.544 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.544 CC lib/iscsi/iscsi_subsystem.o 00:03:12.544 CC lib/iscsi/iscsi_rpc.o 00:03:12.802 CC lib/ftl/utils/ftl_conf.o 00:03:12.802 CC lib/iscsi/task.o 00:03:12.802 CC lib/ftl/utils/ftl_md.o 00:03:12.802 CC lib/ftl/utils/ftl_mempool.o 00:03:12.802 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.802 CC lib/ftl/utils/ftl_property.o 00:03:12.802 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.061 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.061 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.061 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.061 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.061 LIB libspdk_iscsi.a 00:03:13.061 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:13.061 SO libspdk_iscsi.so.8.0 00:03:13.061 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.061 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.061 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.061 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.061 LIB libspdk_vhost.a 00:03:13.061 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.061 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:13.320 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:13.320 SO libspdk_vhost.so.8.0 00:03:13.320 LIB libspdk_nvmf.a 00:03:13.320 SYMLINK libspdk_iscsi.so 00:03:13.320 CC lib/ftl/base/ftl_base_dev.o 00:03:13.320 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.320 CC lib/ftl/ftl_trace.o 00:03:13.320 SYMLINK 
libspdk_vhost.so 00:03:13.320 SO libspdk_nvmf.so.20.0 00:03:13.578 LIB libspdk_ftl.a 00:03:13.578 SYMLINK libspdk_nvmf.so 00:03:13.578 SO libspdk_ftl.so.9.0 00:03:13.837 SYMLINK libspdk_ftl.so 00:03:14.095 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.095 CC module/sock/posix/posix.o 00:03:14.095 CC module/accel/dsa/accel_dsa.o 00:03:14.095 CC module/accel/ioat/accel_ioat.o 00:03:14.095 CC module/blob/bdev/blob_bdev.o 00:03:14.095 CC module/accel/error/accel_error.o 00:03:14.095 CC module/keyring/file/keyring.o 00:03:14.095 CC module/fsdev/aio/fsdev_aio.o 00:03:14.095 CC module/accel/iaa/accel_iaa.o 00:03:14.095 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:14.353 LIB libspdk_env_dpdk_rpc.a 00:03:14.354 SO libspdk_env_dpdk_rpc.so.6.0 00:03:14.354 CC module/keyring/file/keyring_rpc.o 00:03:14.354 SYMLINK libspdk_env_dpdk_rpc.so 00:03:14.354 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:14.354 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.354 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.354 LIB libspdk_scheduler_dynamic.a 00:03:14.354 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.354 CC module/accel/error/accel_error_rpc.o 00:03:14.354 LIB libspdk_blob_bdev.a 00:03:14.354 LIB libspdk_keyring_file.a 00:03:14.354 SO libspdk_blob_bdev.so.11.0 00:03:14.354 LIB libspdk_accel_ioat.a 00:03:14.354 SO libspdk_keyring_file.so.2.0 00:03:14.354 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.354 SO libspdk_accel_ioat.so.6.0 00:03:14.354 LIB libspdk_accel_iaa.a 00:03:14.354 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.354 SYMLINK libspdk_blob_bdev.so 00:03:14.354 SO libspdk_accel_iaa.so.3.0 00:03:14.354 SYMLINK libspdk_keyring_file.so 00:03:14.612 SYMLINK libspdk_accel_ioat.so 00:03:14.612 CC module/fsdev/aio/linux_aio_mgr.o 00:03:14.612 LIB libspdk_accel_error.a 00:03:14.612 SYMLINK libspdk_accel_iaa.so 00:03:14.612 SO libspdk_accel_error.so.2.0 00:03:14.612 LIB libspdk_accel_dsa.a 00:03:14.612 SO libspdk_accel_dsa.so.5.0 00:03:14.612 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.612 SYMLINK libspdk_accel_error.so 00:03:14.612 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.612 CC module/keyring/linux/keyring.o 00:03:14.612 SYMLINK libspdk_accel_dsa.so 00:03:14.612 CC module/keyring/linux/keyring_rpc.o 00:03:14.612 CC module/bdev/delay/vbdev_delay.o 00:03:14.612 LIB libspdk_scheduler_gscheduler.a 00:03:14.870 LIB libspdk_fsdev_aio.a 00:03:14.870 SO libspdk_scheduler_gscheduler.so.4.0 00:03:14.870 LIB libspdk_scheduler_dpdk_governor.a 00:03:14.870 CC module/bdev/error/vbdev_error.o 00:03:14.870 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.870 CC module/bdev/gpt/gpt.o 00:03:14.870 SO libspdk_fsdev_aio.so.1.0 00:03:14.870 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:14.870 LIB libspdk_keyring_linux.a 00:03:14.870 SYMLINK libspdk_scheduler_gscheduler.so 00:03:14.870 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.870 SO libspdk_keyring_linux.so.1.0 00:03:14.870 SYMLINK libspdk_fsdev_aio.so 00:03:14.870 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:14.870 LIB libspdk_sock_posix.a 00:03:14.870 SYMLINK libspdk_keyring_linux.so 00:03:14.870 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.870 SO libspdk_sock_posix.so.6.0 00:03:14.870 LIB libspdk_blobfs_bdev.a 00:03:14.870 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.870 CC module/bdev/malloc/bdev_malloc.o 00:03:14.870 CC module/bdev/null/bdev_null.o 00:03:14.870 SO libspdk_blobfs_bdev.so.6.0 00:03:15.150 CC module/bdev/error/vbdev_error_rpc.o 00:03:15.150 SYMLINK libspdk_sock_posix.so 00:03:15.150 CC module/bdev/null/bdev_null_rpc.o 00:03:15.150 
SYMLINK libspdk_blobfs_bdev.so 00:03:15.150 CC module/bdev/nvme/bdev_nvme.o 00:03:15.150 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:15.150 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.150 LIB libspdk_bdev_error.a 00:03:15.150 SO libspdk_bdev_error.so.6.0 00:03:15.150 CC module/bdev/raid/bdev_raid.o 00:03:15.150 LIB libspdk_bdev_gpt.a 00:03:15.150 SO libspdk_bdev_gpt.so.6.0 00:03:15.150 SYMLINK libspdk_bdev_error.so 00:03:15.150 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.150 LIB libspdk_bdev_null.a 00:03:15.150 SO libspdk_bdev_null.so.6.0 00:03:15.408 LIB libspdk_bdev_delay.a 00:03:15.408 SYMLINK libspdk_bdev_gpt.so 00:03:15.408 SO libspdk_bdev_delay.so.6.0 00:03:15.408 SYMLINK libspdk_bdev_null.so 00:03:15.408 CC module/bdev/split/vbdev_split.o 00:03:15.408 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.408 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:15.408 SYMLINK libspdk_bdev_delay.so 00:03:15.408 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.408 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.408 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.408 CC module/bdev/xnvme/bdev_xnvme.o 00:03:15.408 CC module/bdev/aio/bdev_aio.o 00:03:15.408 LIB libspdk_bdev_split.a 00:03:15.408 LIB libspdk_bdev_passthru.a 00:03:15.408 LIB libspdk_bdev_malloc.a 00:03:15.408 SO libspdk_bdev_split.so.6.0 00:03:15.667 SO libspdk_bdev_passthru.so.6.0 00:03:15.667 SO libspdk_bdev_malloc.so.6.0 00:03:15.667 SYMLINK libspdk_bdev_split.so 00:03:15.667 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.667 SYMLINK libspdk_bdev_malloc.so 00:03:15.667 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.667 SYMLINK libspdk_bdev_passthru.so 00:03:15.667 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:15.667 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.667 LIB libspdk_bdev_lvol.a 00:03:15.668 SO libspdk_bdev_lvol.so.6.0 00:03:15.668 SYMLINK libspdk_bdev_lvol.so 00:03:15.668 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.668 LIB libspdk_bdev_xnvme.a 00:03:15.668 SO libspdk_bdev_xnvme.so.3.0 00:03:15.668 LIB libspdk_bdev_zone_block.a 00:03:15.668 LIB libspdk_bdev_aio.a 00:03:15.668 SO libspdk_bdev_zone_block.so.6.0 00:03:15.925 SYMLINK libspdk_bdev_xnvme.so 00:03:15.925 SO libspdk_bdev_aio.so.6.0 00:03:15.925 CC module/bdev/nvme/nvme_rpc.o 00:03:15.925 CC module/bdev/ftl/bdev_ftl.o 00:03:15.925 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.925 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.925 SYMLINK libspdk_bdev_zone_block.so 00:03:15.925 SYMLINK libspdk_bdev_aio.so 00:03:15.925 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.925 CC module/bdev/raid/raid0.o 00:03:15.925 CC module/bdev/raid/raid1.o 00:03:15.925 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.925 CC module/bdev/nvme/vbdev_opal.o 00:03:15.925 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.182 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.182 LIB libspdk_bdev_ftl.a 00:03:16.182 SO libspdk_bdev_ftl.so.6.0 00:03:16.182 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.182 SYMLINK libspdk_bdev_ftl.so 00:03:16.182 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.182 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.182 CC module/bdev/raid/concat.o 00:03:16.182 LIB libspdk_bdev_iscsi.a 00:03:16.182 SO libspdk_bdev_iscsi.so.6.0 00:03:16.182 SYMLINK libspdk_bdev_iscsi.so 00:03:16.441 LIB libspdk_bdev_raid.a 00:03:16.441 SO libspdk_bdev_raid.so.6.0 00:03:16.441 SYMLINK libspdk_bdev_raid.so 00:03:16.700 LIB libspdk_bdev_virtio.a 00:03:16.700 SO libspdk_bdev_virtio.so.6.0 00:03:16.700 SYMLINK libspdk_bdev_virtio.so 00:03:17.266 LIB libspdk_bdev_nvme.a 
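From "CC lib/ut_mock/mock.o" onward the output switches from DPDK's ninja to SPDK's quiet make: CC/CXX lines are compiles, LIB archives a static library, SO and SYMLINK produce the versioned shared library and its unversioned symlink, and LINK (further down) produces executables. A configure line consistent with what is being built here, ASAN matching the b_sanitize=address setting above and the xnvme bdev module, would look roughly like the sketch below; the exact flag set this job used is not echoed in the log.

  # Hypothetical SPDK configure/build pair consistent with this output;
  # not the literal flags used by the CI job.
  ./configure --enable-asan --with-xnvme
  make -j10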
00:03:17.524 SO libspdk_bdev_nvme.so.7.1 00:03:17.524 SYMLINK libspdk_bdev_nvme.so 00:03:17.781 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.781 CC module/event/subsystems/sock/sock.o 00:03:17.781 CC module/event/subsystems/fsdev/fsdev.o 00:03:17.781 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.781 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.781 CC module/event/subsystems/scheduler/scheduler.o 00:03:17.781 CC module/event/subsystems/keyring/keyring.o 00:03:17.781 CC module/event/subsystems/vmd/vmd.o 00:03:17.781 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.039 LIB libspdk_event_fsdev.a 00:03:18.039 LIB libspdk_event_scheduler.a 00:03:18.039 LIB libspdk_event_sock.a 00:03:18.039 LIB libspdk_event_vhost_blk.a 00:03:18.039 LIB libspdk_event_keyring.a 00:03:18.039 SO libspdk_event_fsdev.so.1.0 00:03:18.039 SO libspdk_event_scheduler.so.4.0 00:03:18.039 SO libspdk_event_sock.so.5.0 00:03:18.039 LIB libspdk_event_iobuf.a 00:03:18.039 SO libspdk_event_vhost_blk.so.3.0 00:03:18.039 SO libspdk_event_keyring.so.1.0 00:03:18.039 LIB libspdk_event_vmd.a 00:03:18.039 SO libspdk_event_iobuf.so.3.0 00:03:18.039 SYMLINK libspdk_event_fsdev.so 00:03:18.039 SO libspdk_event_vmd.so.6.0 00:03:18.039 SYMLINK libspdk_event_scheduler.so 00:03:18.039 SYMLINK libspdk_event_sock.so 00:03:18.039 SYMLINK libspdk_event_vhost_blk.so 00:03:18.039 SYMLINK libspdk_event_keyring.so 00:03:18.039 SYMLINK libspdk_event_iobuf.so 00:03:18.039 SYMLINK libspdk_event_vmd.so 00:03:18.297 CC module/event/subsystems/accel/accel.o 00:03:18.555 LIB libspdk_event_accel.a 00:03:18.555 SO libspdk_event_accel.so.6.0 00:03:18.555 SYMLINK libspdk_event_accel.so 00:03:18.813 CC module/event/subsystems/bdev/bdev.o 00:03:18.813 LIB libspdk_event_bdev.a 00:03:18.813 SO libspdk_event_bdev.so.6.0 00:03:19.071 SYMLINK libspdk_event_bdev.so 00:03:19.071 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:19.071 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:19.071 CC module/event/subsystems/scsi/scsi.o 00:03:19.071 CC module/event/subsystems/ublk/ublk.o 00:03:19.071 CC module/event/subsystems/nbd/nbd.o 00:03:19.334 LIB libspdk_event_ublk.a 00:03:19.334 SO libspdk_event_ublk.so.3.0 00:03:19.334 LIB libspdk_event_scsi.a 00:03:19.334 LIB libspdk_event_nbd.a 00:03:19.334 LIB libspdk_event_nvmf.a 00:03:19.334 SYMLINK libspdk_event_ublk.so 00:03:19.334 SO libspdk_event_nbd.so.6.0 00:03:19.334 SO libspdk_event_scsi.so.6.0 00:03:19.334 SO libspdk_event_nvmf.so.6.0 00:03:19.334 SYMLINK libspdk_event_nbd.so 00:03:19.334 SYMLINK libspdk_event_scsi.so 00:03:19.334 SYMLINK libspdk_event_nvmf.so 00:03:19.600 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:19.600 CC module/event/subsystems/iscsi/iscsi.o 00:03:19.600 LIB libspdk_event_vhost_scsi.a 00:03:19.600 LIB libspdk_event_iscsi.a 00:03:19.600 SO libspdk_event_vhost_scsi.so.3.0 00:03:19.859 SO libspdk_event_iscsi.so.6.0 00:03:19.859 SYMLINK libspdk_event_vhost_scsi.so 00:03:19.859 SYMLINK libspdk_event_iscsi.so 00:03:19.859 SO libspdk.so.6.0 00:03:19.859 SYMLINK libspdk.so 00:03:20.119 CXX app/trace/trace.o 00:03:20.119 CC app/trace_record/trace_record.o 00:03:20.119 TEST_HEADER include/spdk/accel.h 00:03:20.119 TEST_HEADER include/spdk/accel_module.h 00:03:20.119 TEST_HEADER include/spdk/assert.h 00:03:20.119 TEST_HEADER include/spdk/barrier.h 00:03:20.119 TEST_HEADER include/spdk/base64.h 00:03:20.119 TEST_HEADER include/spdk/bdev.h 00:03:20.119 TEST_HEADER include/spdk/bdev_module.h 00:03:20.119 TEST_HEADER include/spdk/bdev_zone.h 00:03:20.119 TEST_HEADER 
include/spdk/bit_array.h 00:03:20.119 TEST_HEADER include/spdk/bit_pool.h 00:03:20.119 TEST_HEADER include/spdk/blob_bdev.h 00:03:20.119 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:20.119 TEST_HEADER include/spdk/blobfs.h 00:03:20.119 TEST_HEADER include/spdk/blob.h 00:03:20.119 TEST_HEADER include/spdk/conf.h 00:03:20.119 TEST_HEADER include/spdk/config.h 00:03:20.119 TEST_HEADER include/spdk/cpuset.h 00:03:20.119 TEST_HEADER include/spdk/crc16.h 00:03:20.119 TEST_HEADER include/spdk/crc32.h 00:03:20.119 TEST_HEADER include/spdk/crc64.h 00:03:20.119 TEST_HEADER include/spdk/dif.h 00:03:20.119 TEST_HEADER include/spdk/dma.h 00:03:20.119 TEST_HEADER include/spdk/endian.h 00:03:20.119 CC app/nvmf_tgt/nvmf_main.o 00:03:20.119 TEST_HEADER include/spdk/env_dpdk.h 00:03:20.119 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.119 TEST_HEADER include/spdk/env.h 00:03:20.119 TEST_HEADER include/spdk/event.h 00:03:20.119 TEST_HEADER include/spdk/fd_group.h 00:03:20.119 TEST_HEADER include/spdk/fd.h 00:03:20.119 TEST_HEADER include/spdk/file.h 00:03:20.119 TEST_HEADER include/spdk/fsdev.h 00:03:20.119 TEST_HEADER include/spdk/fsdev_module.h 00:03:20.119 TEST_HEADER include/spdk/ftl.h 00:03:20.119 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:20.119 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.119 TEST_HEADER include/spdk/hexlify.h 00:03:20.119 TEST_HEADER include/spdk/histogram_data.h 00:03:20.119 TEST_HEADER include/spdk/idxd.h 00:03:20.119 CC app/spdk_tgt/spdk_tgt.o 00:03:20.119 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.119 TEST_HEADER include/spdk/init.h 00:03:20.119 TEST_HEADER include/spdk/ioat.h 00:03:20.119 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.119 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.119 CC test/thread/poller_perf/poller_perf.o 00:03:20.119 TEST_HEADER include/spdk/json.h 00:03:20.119 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.119 TEST_HEADER include/spdk/keyring.h 00:03:20.119 TEST_HEADER include/spdk/keyring_module.h 00:03:20.119 CC examples/util/zipf/zipf.o 00:03:20.119 TEST_HEADER include/spdk/likely.h 00:03:20.119 TEST_HEADER include/spdk/log.h 00:03:20.119 TEST_HEADER include/spdk/lvol.h 00:03:20.119 TEST_HEADER include/spdk/md5.h 00:03:20.119 TEST_HEADER include/spdk/memory.h 00:03:20.119 TEST_HEADER include/spdk/mmio.h 00:03:20.119 TEST_HEADER include/spdk/nbd.h 00:03:20.119 TEST_HEADER include/spdk/net.h 00:03:20.119 TEST_HEADER include/spdk/notify.h 00:03:20.119 TEST_HEADER include/spdk/nvme.h 00:03:20.119 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.119 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.119 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.119 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.119 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.119 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.119 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.119 TEST_HEADER include/spdk/nvmf.h 00:03:20.119 CC test/dma/test_dma/test_dma.o 00:03:20.119 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.119 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.119 TEST_HEADER include/spdk/opal.h 00:03:20.119 TEST_HEADER include/spdk/opal_spec.h 00:03:20.119 TEST_HEADER include/spdk/pci_ids.h 00:03:20.119 TEST_HEADER include/spdk/pipe.h 00:03:20.119 TEST_HEADER include/spdk/queue.h 00:03:20.119 TEST_HEADER include/spdk/reduce.h 00:03:20.119 TEST_HEADER include/spdk/rpc.h 00:03:20.119 TEST_HEADER include/spdk/scheduler.h 00:03:20.119 TEST_HEADER include/spdk/scsi.h 00:03:20.119 TEST_HEADER include/spdk/scsi_spec.h 00:03:20.119 TEST_HEADER include/spdk/sock.h 00:03:20.120 TEST_HEADER 
include/spdk/stdinc.h 00:03:20.120 CC test/app/bdev_svc/bdev_svc.o 00:03:20.120 TEST_HEADER include/spdk/string.h 00:03:20.120 TEST_HEADER include/spdk/thread.h 00:03:20.120 TEST_HEADER include/spdk/trace.h 00:03:20.120 TEST_HEADER include/spdk/trace_parser.h 00:03:20.120 TEST_HEADER include/spdk/tree.h 00:03:20.120 TEST_HEADER include/spdk/ublk.h 00:03:20.120 TEST_HEADER include/spdk/util.h 00:03:20.120 TEST_HEADER include/spdk/uuid.h 00:03:20.120 TEST_HEADER include/spdk/version.h 00:03:20.378 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:20.378 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:20.378 TEST_HEADER include/spdk/vhost.h 00:03:20.378 TEST_HEADER include/spdk/vmd.h 00:03:20.378 TEST_HEADER include/spdk/xor.h 00:03:20.378 TEST_HEADER include/spdk/zipf.h 00:03:20.378 CXX test/cpp_headers/accel.o 00:03:20.378 LINK zipf 00:03:20.378 LINK spdk_trace_record 00:03:20.378 LINK poller_perf 00:03:20.378 LINK nvmf_tgt 00:03:20.378 LINK iscsi_tgt 00:03:20.378 LINK spdk_tgt 00:03:20.378 CXX test/cpp_headers/accel_module.o 00:03:20.378 LINK bdev_svc 00:03:20.378 LINK spdk_trace 00:03:20.378 CC test/app/histogram_perf/histogram_perf.o 00:03:20.637 CC examples/ioat/perf/perf.o 00:03:20.637 CXX test/cpp_headers/assert.o 00:03:20.637 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.637 CC examples/ioat/verify/verify.o 00:03:20.637 CC app/spdk_nvme_perf/perf.o 00:03:20.637 CC app/spdk_nvme_identify/identify.o 00:03:20.637 CC app/spdk_lspci/spdk_lspci.o 00:03:20.637 LINK histogram_perf 00:03:20.637 CC app/spdk_nvme_discover/discovery_aer.o 00:03:20.637 LINK test_dma 00:03:20.637 CXX test/cpp_headers/barrier.o 00:03:20.637 LINK spdk_lspci 00:03:20.637 LINK verify 00:03:20.637 CXX test/cpp_headers/base64.o 00:03:20.637 LINK ioat_perf 00:03:20.637 LINK spdk_nvme_discover 00:03:20.896 CXX test/cpp_headers/bdev.o 00:03:20.896 LINK nvme_fuzz 00:03:20.896 CC app/spdk_top/spdk_top.o 00:03:20.896 CC test/event/event_perf/event_perf.o 00:03:20.896 CC test/event/reactor/reactor.o 00:03:20.896 CC test/nvme/aer/aer.o 00:03:20.896 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.896 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.896 CXX test/cpp_headers/bdev_module.o 00:03:21.154 LINK event_perf 00:03:21.154 LINK reactor 00:03:21.154 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.154 LINK lsvmd 00:03:21.154 CXX test/cpp_headers/bdev_zone.o 00:03:21.154 LINK aer 00:03:21.154 LINK spdk_nvme_identify 00:03:21.154 CC test/event/reactor_perf/reactor_perf.o 00:03:21.154 CC app/vhost/vhost.o 00:03:21.154 CC examples/vmd/led/led.o 00:03:21.413 CXX test/cpp_headers/bit_array.o 00:03:21.413 CXX test/cpp_headers/bit_pool.o 00:03:21.413 LINK reactor_perf 00:03:21.413 LINK led 00:03:21.413 CC test/nvme/reset/reset.o 00:03:21.413 LINK spdk_nvme_perf 00:03:21.413 LINK vhost 00:03:21.413 CXX test/cpp_headers/blob_bdev.o 00:03:21.413 LINK mem_callbacks 00:03:21.413 CC test/nvme/sgl/sgl.o 00:03:21.413 CC test/event/app_repeat/app_repeat.o 00:03:21.413 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.670 CXX test/cpp_headers/blobfs.o 00:03:21.670 LINK reset 00:03:21.670 LINK app_repeat 00:03:21.670 CC test/env/vtophys/vtophys.o 00:03:21.670 CC examples/idxd/perf/perf.o 00:03:21.670 CC test/rpc_client/rpc_client_test.o 00:03:21.670 CXX test/cpp_headers/blob.o 00:03:21.670 LINK spdk_top 00:03:21.670 LINK sgl 00:03:21.670 LINK vtophys 00:03:21.929 CC test/nvme/e2edp/nvme_dp.o 00:03:21.929 CC test/event/scheduler/scheduler.o 00:03:21.929 CXX test/cpp_headers/conf.o 00:03:21.929 LINK rpc_client_test 00:03:21.929 CC test/accel/dif/dif.o 
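The long run of CXX test/cpp_headers/*.o targets above is SPDK's header self-containedness check: every public header under include/spdk is compiled on its own as a C++ translation unit, so a header that forgets a transitive #include fails immediately. A minimal sketch of the idea (paths and compiler flags are assumptions, not the exact autotest recipe):

```bash
# Sketch only: compile each public header standalone so that a missing
# transitive #include fails loudly. Flags and paths are illustrative.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    echo "#include <spdk/${name}.h>" > "/tmp/${name}.cpp"
    g++ -I include -c "/tmp/${name}.cpp" -o "/tmp/${name}.o"
done
```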
00:03:21.929 LINK idxd_perf 00:03:21.929 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:21.929 CC app/spdk_dd/spdk_dd.o 00:03:21.929 CXX test/cpp_headers/config.o 00:03:21.929 CXX test/cpp_headers/cpuset.o 00:03:21.929 CC app/fio/nvme/fio_plugin.o 00:03:22.188 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:22.188 LINK nvme_dp 00:03:22.188 LINK env_dpdk_post_init 00:03:22.188 LINK scheduler 00:03:22.188 CXX test/cpp_headers/crc16.o 00:03:22.188 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:22.188 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:22.188 CXX test/cpp_headers/crc32.o 00:03:22.188 LINK spdk_dd 00:03:22.188 CC test/nvme/overhead/overhead.o 00:03:22.188 CC test/env/memory/memory_ut.o 00:03:22.446 CC test/nvme/err_injection/err_injection.o 00:03:22.446 LINK interrupt_tgt 00:03:22.446 CXX test/cpp_headers/crc64.o 00:03:22.446 LINK iscsi_fuzz 00:03:22.446 CC test/nvme/startup/startup.o 00:03:22.446 LINK spdk_nvme 00:03:22.446 LINK overhead 00:03:22.446 LINK err_injection 00:03:22.446 CXX test/cpp_headers/dif.o 00:03:22.446 LINK vhost_fuzz 00:03:22.704 LINK dif 00:03:22.704 LINK startup 00:03:22.704 CXX test/cpp_headers/dma.o 00:03:22.704 CC app/fio/bdev/fio_plugin.o 00:03:22.704 CXX test/cpp_headers/endian.o 00:03:22.704 CC examples/thread/thread/thread_ex.o 00:03:22.704 CC test/app/jsoncat/jsoncat.o 00:03:22.704 CXX test/cpp_headers/env_dpdk.o 00:03:22.704 CC test/nvme/reserve/reserve.o 00:03:22.704 CXX test/cpp_headers/env.o 00:03:22.704 LINK jsoncat 00:03:22.704 CC test/nvme/simple_copy/simple_copy.o 00:03:22.962 CC test/env/pci/pci_ut.o 00:03:22.962 CC test/nvme/connect_stress/connect_stress.o 00:03:22.962 CXX test/cpp_headers/event.o 00:03:22.962 LINK thread 00:03:22.962 CC examples/sock/hello_world/hello_sock.o 00:03:22.962 LINK reserve 00:03:22.962 CC test/app/stub/stub.o 00:03:22.962 LINK simple_copy 00:03:22.962 LINK connect_stress 00:03:22.962 CXX test/cpp_headers/fd_group.o 00:03:22.962 CXX test/cpp_headers/fd.o 00:03:22.962 CXX test/cpp_headers/file.o 00:03:23.220 LINK spdk_bdev 00:03:23.220 LINK stub 00:03:23.220 CXX test/cpp_headers/fsdev.o 00:03:23.220 LINK memory_ut 00:03:23.220 LINK hello_sock 00:03:23.220 LINK pci_ut 00:03:23.220 CXX test/cpp_headers/fsdev_module.o 00:03:23.220 CXX test/cpp_headers/ftl.o 00:03:23.220 CC test/nvme/boot_partition/boot_partition.o 00:03:23.220 CC test/nvme/compliance/nvme_compliance.o 00:03:23.220 CXX test/cpp_headers/fuse_dispatcher.o 00:03:23.479 CXX test/cpp_headers/gpt_spec.o 00:03:23.479 LINK boot_partition 00:03:23.479 CC test/blobfs/mkfs/mkfs.o 00:03:23.479 CXX test/cpp_headers/hexlify.o 00:03:23.479 CC test/lvol/esnap/esnap.o 00:03:23.479 CC test/bdev/bdevio/bdevio.o 00:03:23.479 CC examples/accel/perf/accel_perf.o 00:03:23.479 CC examples/blob/hello_world/hello_blob.o 00:03:23.479 CC test/nvme/fused_ordering/fused_ordering.o 00:03:23.479 LINK nvme_compliance 00:03:23.479 LINK mkfs 00:03:23.479 CXX test/cpp_headers/histogram_data.o 00:03:23.479 CC examples/blob/cli/blobcli.o 00:03:23.479 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:23.737 LINK hello_blob 00:03:23.737 CXX test/cpp_headers/idxd.o 00:03:23.737 LINK fused_ordering 00:03:23.737 CXX test/cpp_headers/idxd_spec.o 00:03:23.737 CC test/nvme/fdp/fdp.o 00:03:23.737 LINK doorbell_aers 00:03:23.737 LINK bdevio 00:03:23.737 CXX test/cpp_headers/init.o 00:03:23.737 CXX test/cpp_headers/ioat.o 00:03:23.737 LINK accel_perf 00:03:23.737 CXX test/cpp_headers/ioat_spec.o 00:03:23.996 CC test/nvme/cuse/cuse.o 00:03:23.996 CXX test/cpp_headers/iscsi_spec.o 
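Among the targets above are the two fio plugins (app/fio/nvme and app/fio/bdev). For orientation, a hedged sketch of how the NVMe plugin is commonly invoked once built; the PCI address is hypothetical and the job parameters are illustrative (fio treats ':' specially in filenames, so the BDF's colons become dots):

```bash
# Hedged usage sketch for the SPDK NVMe fio plugin; traddr is made up.
LD_PRELOAD=./build/fio/spdk_nvme fio \
    --name=probe --ioengine=spdk --thread=1 \
    --filename='trtype=PCIe traddr=0000.00.10.0 ns=1' \
    --rw=randread --bs=4k --time_based=1 --runtime=5
```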
00:03:23.996 CXX test/cpp_headers/json.o 00:03:23.996 CXX test/cpp_headers/jsonrpc.o 00:03:23.996 LINK blobcli 00:03:23.996 CXX test/cpp_headers/keyring.o 00:03:23.996 CXX test/cpp_headers/keyring_module.o 00:03:23.996 CC examples/nvme/hello_world/hello_world.o 00:03:23.996 LINK fdp 00:03:23.996 CC examples/nvme/reconnect/reconnect.o 00:03:24.254 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:24.254 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.254 CXX test/cpp_headers/likely.o 00:03:24.254 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.254 CC examples/nvme/arbitration/arbitration.o 00:03:24.254 LINK hello_world 00:03:24.254 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.254 CXX test/cpp_headers/log.o 00:03:24.512 LINK hello_bdev 00:03:24.512 LINK hello_fsdev 00:03:24.512 LINK arbitration 00:03:24.512 LINK reconnect 00:03:24.512 CXX test/cpp_headers/lvol.o 00:03:24.512 CC examples/nvme/hotplug/hotplug.o 00:03:24.512 CXX test/cpp_headers/md5.o 00:03:24.512 LINK nvme_manage 00:03:24.512 CXX test/cpp_headers/memory.o 00:03:24.512 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:24.769 CC examples/nvme/abort/abort.o 00:03:24.769 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.769 CXX test/cpp_headers/mmio.o 00:03:24.769 CXX test/cpp_headers/nbd.o 00:03:24.769 LINK hotplug 00:03:24.769 CXX test/cpp_headers/net.o 00:03:24.769 CXX test/cpp_headers/notify.o 00:03:24.769 CXX test/cpp_headers/nvme.o 00:03:24.769 LINK pmr_persistence 00:03:24.769 LINK cmb_copy 00:03:24.769 CXX test/cpp_headers/nvme_intel.o 00:03:24.769 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.769 LINK cuse 00:03:25.025 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.025 CXX test/cpp_headers/nvme_spec.o 00:03:25.025 CXX test/cpp_headers/nvme_zns.o 00:03:25.025 LINK abort 00:03:25.025 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.025 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.025 CXX test/cpp_headers/nvmf.o 00:03:25.025 CXX test/cpp_headers/nvmf_spec.o 00:03:25.026 CXX test/cpp_headers/nvmf_transport.o 00:03:25.026 CXX test/cpp_headers/opal.o 00:03:25.026 CXX test/cpp_headers/opal_spec.o 00:03:25.026 CXX test/cpp_headers/pci_ids.o 00:03:25.026 CXX test/cpp_headers/pipe.o 00:03:25.026 CXX test/cpp_headers/queue.o 00:03:25.026 LINK bdevperf 00:03:25.026 CXX test/cpp_headers/reduce.o 00:03:25.026 CXX test/cpp_headers/rpc.o 00:03:25.026 CXX test/cpp_headers/scheduler.o 00:03:25.026 CXX test/cpp_headers/scsi.o 00:03:25.281 CXX test/cpp_headers/scsi_spec.o 00:03:25.281 CXX test/cpp_headers/sock.o 00:03:25.281 CXX test/cpp_headers/stdinc.o 00:03:25.281 CXX test/cpp_headers/string.o 00:03:25.281 CXX test/cpp_headers/thread.o 00:03:25.281 CXX test/cpp_headers/trace.o 00:03:25.281 CXX test/cpp_headers/trace_parser.o 00:03:25.281 CXX test/cpp_headers/tree.o 00:03:25.281 CXX test/cpp_headers/ublk.o 00:03:25.281 CXX test/cpp_headers/util.o 00:03:25.281 CXX test/cpp_headers/uuid.o 00:03:25.281 CXX test/cpp_headers/version.o 00:03:25.281 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.281 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.281 CXX test/cpp_headers/vhost.o 00:03:25.281 CXX test/cpp_headers/vmd.o 00:03:25.538 CXX test/cpp_headers/xor.o 00:03:25.539 CXX test/cpp_headers/zipf.o 00:03:25.539 CC examples/nvmf/nvmf/nvmf.o 00:03:25.796 LINK nvmf 00:03:28.324 LINK esnap 00:03:28.894 00:03:28.894 real 1m9.107s 00:03:28.894 user 6m16.844s 00:03:28.894 sys 1m8.538s 00:03:28.894 17:33:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:28.894 ************************************ 00:03:28.894 END TEST make 00:03:28.894 
17:33:52 make -- common/autotest_common.sh@10 -- $ set +x 00:03:28.894 ************************************ 00:03:28.894 17:33:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.894 17:33:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:28.894 17:33:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.894 17:33:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.894 17:33:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.894 17:33:52 -- pm/common@44 -- $ pid=5057 00:03:28.894 17:33:52 -- pm/common@50 -- $ kill -TERM 5057 00:03:28.894 17:33:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.894 17:33:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.894 17:33:52 -- pm/common@44 -- $ pid=5058 00:03:28.894 17:33:52 -- pm/common@50 -- $ kill -TERM 5058 00:03:28.894 17:33:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:28.894 17:33:52 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:28.894 17:33:52 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:28.894 17:33:52 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:28.894 17:33:52 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:28.894 17:33:52 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:28.894 17:33:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.894 17:33:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.894 17:33:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.894 17:33:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.894 17:33:52 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.894 17:33:52 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.894 17:33:52 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.894 17:33:52 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.894 17:33:52 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.894 17:33:52 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.894 17:33:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.894 17:33:52 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.894 17:33:52 -- scripts/common.sh@345 -- # : 1 00:03:28.894 17:33:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.894 17:33:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.894 17:33:52 -- scripts/common.sh@365 -- # decimal 1 00:03:28.894 17:33:52 -- scripts/common.sh@353 -- # local d=1 00:03:28.894 17:33:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.894 17:33:52 -- scripts/common.sh@355 -- # echo 1 00:03:28.894 17:33:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.894 17:33:52 -- scripts/common.sh@366 -- # decimal 2 00:03:28.894 17:33:52 -- scripts/common.sh@353 -- # local d=2 00:03:28.894 17:33:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.894 17:33:52 -- scripts/common.sh@355 -- # echo 2 00:03:28.894 17:33:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.894 17:33:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.894 17:33:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.894 17:33:52 -- scripts/common.sh@368 -- # return 0 00:03:28.894 17:33:52 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.894 17:33:52 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:28.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.894 --rc genhtml_branch_coverage=1 00:03:28.894 --rc genhtml_function_coverage=1 00:03:28.894 --rc genhtml_legend=1 00:03:28.894 --rc geninfo_all_blocks=1 00:03:28.894 --rc geninfo_unexecuted_blocks=1 00:03:28.894 00:03:28.894 ' 00:03:28.894 17:33:52 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:28.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.894 --rc genhtml_branch_coverage=1 00:03:28.894 --rc genhtml_function_coverage=1 00:03:28.894 --rc genhtml_legend=1 00:03:28.894 --rc geninfo_all_blocks=1 00:03:28.894 --rc geninfo_unexecuted_blocks=1 00:03:28.894 00:03:28.894 ' 00:03:28.894 17:33:52 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:28.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.894 --rc genhtml_branch_coverage=1 00:03:28.894 --rc genhtml_function_coverage=1 00:03:28.894 --rc genhtml_legend=1 00:03:28.894 --rc geninfo_all_blocks=1 00:03:28.894 --rc geninfo_unexecuted_blocks=1 00:03:28.894 00:03:28.894 ' 00:03:28.894 17:33:52 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:28.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.894 --rc genhtml_branch_coverage=1 00:03:28.894 --rc genhtml_function_coverage=1 00:03:28.894 --rc genhtml_legend=1 00:03:28.894 --rc geninfo_all_blocks=1 00:03:28.894 --rc geninfo_unexecuted_blocks=1 00:03:28.894 00:03:28.894 ' 00:03:28.894 17:33:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.894 17:33:52 -- nvmf/common.sh@7 -- # uname -s 00:03:28.894 17:33:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.894 17:33:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.894 17:33:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.894 17:33:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.894 17:33:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.894 17:33:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.894 17:33:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.894 17:33:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.894 17:33:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.894 17:33:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.894 17:33:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:338f90b6-6028-4f4f-a1c1-7f5cc850a1b2 00:03:28.894 
17:33:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=338f90b6-6028-4f4f-a1c1-7f5cc850a1b2 00:03:28.894 17:33:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.894 17:33:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.894 17:33:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:28.894 17:33:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.894 17:33:52 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.894 17:33:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.895 17:33:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.895 17:33:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.895 17:33:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.895 17:33:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.895 17:33:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.895 17:33:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.895 17:33:52 -- paths/export.sh@5 -- # export PATH 00:03:28.895 17:33:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.895 17:33:52 -- nvmf/common.sh@51 -- # : 0 00:03:28.895 17:33:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:28.895 17:33:52 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:28.895 17:33:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:28.895 17:33:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.895 17:33:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.895 17:33:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:28.895 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:28.895 17:33:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:28.895 17:33:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:28.895 17:33:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:28.895 17:33:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.895 17:33:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.895 17:33:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.895 17:33:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.895 17:33:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.895 17:33:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.895 17:33:52 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.895 17:33:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.895 17:33:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.895 17:33:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.895 17:33:52 -- spdk/autotest.sh@48 -- # udevadm_pid=54272 00:03:28.895 17:33:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:28.895 17:33:52 -- pm/common@17 -- # local monitor 00:03:28.895 17:33:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.895 17:33:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:28.895 17:33:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.895 17:33:52 -- pm/common@25 -- # sleep 1 00:03:28.895 17:33:52 -- pm/common@21 -- # date +%s 00:03:28.895 17:33:52 -- pm/common@21 -- # date +%s 00:03:28.895 17:33:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732124032 00:03:28.895 17:33:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732124032 00:03:28.895 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732124032_collect-cpu-load.pm.log 00:03:28.895 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732124032_collect-vmstat.pm.log 00:03:30.270 17:33:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:30.270 17:33:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:30.270 17:33:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.270 17:33:53 -- common/autotest_common.sh@10 -- # set +x 00:03:30.270 17:33:53 -- spdk/autotest.sh@59 -- # create_test_list 00:03:30.270 17:33:53 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:30.270 17:33:53 -- common/autotest_common.sh@10 -- # set +x 00:03:30.270 17:33:53 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:30.270 17:33:53 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:30.270 17:33:53 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:30.270 17:33:53 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:30.270 17:33:53 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:30.270 17:33:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:30.270 17:33:53 -- common/autotest_common.sh@1457 -- # uname 00:03:30.270 17:33:53 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:30.270 17:33:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:30.270 17:33:53 -- common/autotest_common.sh@1477 -- # uname 00:03:30.270 17:33:53 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:30.270 17:33:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:30.270 17:33:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:30.270 lcov: LCOV version 1.15 00:03:30.270 17:33:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.156 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:45.156 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:00.069 17:34:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:00.069 17:34:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.069 17:34:22 -- common/autotest_common.sh@10 -- # set +x 00:04:00.069 17:34:22 -- spdk/autotest.sh@78 -- # rm -f 00:04:00.069 17:34:22 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.069 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:00.069 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:00.069 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:00.069 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:00.069 17:34:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:00.069 17:34:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:00.069 17:34:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:00.069 17:34:23 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:00.069 17:34:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.069 17:34:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.069 17:34:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.069 17:34:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.069 17:34:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:00.069 17:34:23 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:00.069 17:34:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.069 17:34:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:00.069 17:34:23 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:00.069 17:34:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:00.069 17:34:23 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.069 17:34:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.069 17:34:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:00.069 17:34:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:00.069 17:34:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.069 17:34:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:00.069 17:34:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.069 17:34:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.069 17:34:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:00.069 17:34:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:00.069 17:34:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.332 No valid GPT data, bailing 00:04:00.332 17:34:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.332 17:34:23 -- scripts/common.sh@394 -- # pt= 00:04:00.332 17:34:23 -- scripts/common.sh@395 -- # return 1 00:04:00.332 17:34:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.332 1+0 records in 00:04:00.332 1+0 records out 00:04:00.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026895 s, 39.0 MB/s 00:04:00.332 17:34:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.332 17:34:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.332 17:34:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:00.332 17:34:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:00.332 17:34:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.332 No valid GPT data, bailing 00:04:00.332 17:34:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.332 17:34:23 -- scripts/common.sh@394 -- # pt= 00:04:00.332 17:34:23 -- scripts/common.sh@395 -- # return 1 00:04:00.332 17:34:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.332 1+0 records in 00:04:00.332 1+0 records out 00:04:00.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450141 s, 233 MB/s 00:04:00.332 17:34:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.332 17:34:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.332 17:34:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:00.332 17:34:23 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:00.332 17:34:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:00.332 No valid GPT data, bailing 00:04:00.332 17:34:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:00.332 17:34:23 -- scripts/common.sh@394 -- # pt= 00:04:00.332 17:34:23 -- scripts/common.sh@395 -- # return 1 00:04:00.332 17:34:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:00.332 1+0 
records in 00:04:00.332 1+0 records out 00:04:00.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517831 s, 202 MB/s 00:04:00.332 17:34:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.332 17:34:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.332 17:34:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:00.332 17:34:23 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:00.332 17:34:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:00.593 No valid GPT data, bailing 00:04:00.593 17:34:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:00.593 17:34:23 -- scripts/common.sh@394 -- # pt= 00:04:00.593 17:34:23 -- scripts/common.sh@395 -- # return 1 00:04:00.593 17:34:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:00.593 1+0 records in 00:04:00.593 1+0 records out 00:04:00.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511707 s, 205 MB/s 00:04:00.593 17:34:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.593 17:34:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.593 17:34:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:00.593 17:34:23 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:00.593 17:34:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:00.593 No valid GPT data, bailing 00:04:00.593 17:34:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:00.593 17:34:24 -- scripts/common.sh@394 -- # pt= 00:04:00.593 17:34:24 -- scripts/common.sh@395 -- # return 1 00:04:00.593 17:34:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:00.593 1+0 records in 00:04:00.593 1+0 records out 00:04:00.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00713148 s, 147 MB/s 00:04:00.593 17:34:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.593 17:34:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.593 17:34:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:00.593 17:34:24 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:00.593 17:34:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:00.593 No valid GPT data, bailing 00:04:00.593 17:34:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:00.593 17:34:24 -- scripts/common.sh@394 -- # pt= 00:04:00.593 17:34:24 -- scripts/common.sh@395 -- # return 1 00:04:00.593 17:34:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:00.593 1+0 records in 00:04:00.593 1+0 records out 00:04:00.593 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537849 s, 195 MB/s 00:04:00.593 17:34:24 -- spdk/autotest.sh@105 -- # sync 00:04:00.593 17:34:24 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.593 17:34:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.593 17:34:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.508 17:34:25 -- spdk/autotest.sh@111 -- # uname -s 00:04:02.508 17:34:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:02.509 17:34:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:02.509 17:34:25 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.341 
Hugepages 00:04:03.341 node hugesize free / total 00:04:03.341 node0 1048576kB 0 / 0 00:04:03.341 node0 2048kB 0 / 0 00:04:03.341 00:04:03.341 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.602 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.602 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:03.602 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:03.602 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:03.862 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:03.862 17:34:27 -- spdk/autotest.sh@117 -- # uname -s 00:04:03.862 17:34:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:03.862 17:34:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:03.862 17:34:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.692 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.692 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.692 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.953 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.953 17:34:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:05.895 17:34:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:05.895 17:34:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:05.895 17:34:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.895 17:34:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:05.895 17:34:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.895 17:34:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.895 17:34:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.895 17:34:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.895 17:34:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.895 17:34:29 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:05.895 17:34:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:05.895 17:34:29 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.462 Waiting for block devices as requested 00:04:06.462 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.722 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.722 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.062 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:12.062 17:34:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.062 17:34:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.062 17:34:35 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.062 17:34:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:12.062 17:34:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:12.062 17:34:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.062 17:34:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.062 17:34:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.062 17:34:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1543 -- # continue 00:04:12.062 17:34:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.062 17:34:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.062 17:34:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.062 17:34:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.062 17:34:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1543 -- # continue 00:04:12.062 17:34:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.062 17:34:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:12.062 17:34:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.062 17:34:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.062 17:34:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.062 17:34:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.063 17:34:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.063 17:34:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.063 17:34:35 -- common/autotest_common.sh@1543 -- # continue 00:04:12.063 17:34:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.063 17:34:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:12.063 17:34:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.063 17:34:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:12.063 17:34:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:12.063 17:34:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:12.063 17:34:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:12.063 17:34:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:12.063 17:34:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:12.063 17:34:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:12.063 17:34:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:12.063 17:34:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.063 17:34:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.063 17:34:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.063 17:34:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.063 17:34:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.063 17:34:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:12.063 17:34:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.063 17:34:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.063 17:34:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.063 17:34:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
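The id-ctrl loop traced above gates the namespace-revert step: bit 3 of OACS (mask 0x8) advertises Namespace Management support, and unvmcap reports the controller's unallocated NVM capacity. A condensed replay of the same probe, assuming nvme-cli is installed:

```bash
# Condensed form of the check traced above (requires nvme-cli).
# OACS bit 3 (mask 0x8) => controller supports Namespace Management.
oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)      # e.g. ' 0x12a'
if (( oacs & 0x8 )); then
    unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "all NVM capacity allocated; nothing to revert"
fi
```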
00:04:12.063 17:34:35 -- common/autotest_common.sh@1543 -- # continue 00:04:12.063 17:34:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:12.063 17:34:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.063 17:34:35 -- common/autotest_common.sh@10 -- # set +x 00:04:12.063 17:34:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:12.063 17:34:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.063 17:34:35 -- common/autotest_common.sh@10 -- # set +x 00:04:12.063 17:34:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.227 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.227 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.227 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.227 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.227 17:34:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:13.227 17:34:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.227 17:34:36 -- common/autotest_common.sh@10 -- # set +x 00:04:13.487 17:34:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:13.487 17:34:36 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:13.487 17:34:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:13.487 17:34:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:13.487 17:34:36 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:13.487 17:34:36 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:13.487 17:34:36 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:13.487 17:34:36 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:13.487 17:34:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.487 17:34:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.487 17:34:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.487 17:34:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.487 17:34:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.487 17:34:36 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:13.487 17:34:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:13.487 17:34:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.487 17:34:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:13.487 17:34:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.487 17:34:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.487 17:34:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.487 17:34:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:13.487 17:34:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.487 17:34:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.487 17:34:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.487 17:34:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:13.487 17:34:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.488 17:34:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
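get_nvme_bdfs, traced just above, derives the device list by asking gen_nvme.sh for an SPDK bdev JSON config and pulling each controller's PCI address out with jq. Replayed standalone with the same paths as the log:

```bash
# Standalone replay of get_nvme_bdfs from the trace above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "No NVMe devices found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"    # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
```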
00:04:13.488 17:34:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.488 17:34:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:13.488 17:34:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.488 17:34:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.488 17:34:36 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:13.488 17:34:36 -- common/autotest_common.sh@1572 -- # return 0 00:04:13.488 17:34:36 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:13.488 17:34:36 -- common/autotest_common.sh@1580 -- # return 0 00:04:13.488 17:34:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:13.488 17:34:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:13.488 17:34:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:13.488 17:34:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:13.488 17:34:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:13.488 17:34:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.488 17:34:36 -- common/autotest_common.sh@10 -- # set +x 00:04:13.488 17:34:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:13.488 17:34:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:13.488 17:34:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.488 17:34:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.488 17:34:36 -- common/autotest_common.sh@10 -- # set +x 00:04:13.488 ************************************ 00:04:13.488 START TEST env 00:04:13.488 ************************************ 00:04:13.488 17:34:36 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:13.488 * Looking for test storage... 00:04:13.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:13.488 17:34:36 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:13.488 17:34:36 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.488 17:34:36 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:13.748 17:34:37 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.748 17:34:37 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.748 17:34:37 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.748 17:34:37 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.748 17:34:37 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.748 17:34:37 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.748 17:34:37 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.748 17:34:37 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.748 17:34:37 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.748 17:34:37 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.748 17:34:37 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.748 17:34:37 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.748 17:34:37 env -- scripts/common.sh@344 -- # case "$op" in 00:04:13.748 17:34:37 env -- scripts/common.sh@345 -- # : 1 00:04:13.748 17:34:37 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.748 17:34:37 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.749 17:34:37 env -- scripts/common.sh@365 -- # decimal 1 00:04:13.749 17:34:37 env -- scripts/common.sh@353 -- # local d=1 00:04:13.749 17:34:37 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.749 17:34:37 env -- scripts/common.sh@355 -- # echo 1 00:04:13.749 17:34:37 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.749 17:34:37 env -- scripts/common.sh@366 -- # decimal 2 00:04:13.749 17:34:37 env -- scripts/common.sh@353 -- # local d=2 00:04:13.749 17:34:37 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.749 17:34:37 env -- scripts/common.sh@355 -- # echo 2 00:04:13.749 17:34:37 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.749 17:34:37 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.749 17:34:37 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.749 17:34:37 env -- scripts/common.sh@368 -- # return 0 00:04:13.749 17:34:37 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.749 17:34:37 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.749 --rc genhtml_branch_coverage=1 00:04:13.749 --rc genhtml_function_coverage=1 00:04:13.749 --rc genhtml_legend=1 00:04:13.749 --rc geninfo_all_blocks=1 00:04:13.749 --rc geninfo_unexecuted_blocks=1 00:04:13.749 00:04:13.749 ' 00:04:13.749 17:34:37 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.749 --rc genhtml_branch_coverage=1 00:04:13.749 --rc genhtml_function_coverage=1 00:04:13.749 --rc genhtml_legend=1 00:04:13.749 --rc geninfo_all_blocks=1 00:04:13.749 --rc geninfo_unexecuted_blocks=1 00:04:13.749 00:04:13.749 ' 00:04:13.749 17:34:37 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.749 --rc genhtml_branch_coverage=1 00:04:13.749 --rc genhtml_function_coverage=1 00:04:13.749 --rc genhtml_legend=1 00:04:13.749 --rc geninfo_all_blocks=1 00:04:13.749 --rc geninfo_unexecuted_blocks=1 00:04:13.749 00:04:13.749 ' 00:04:13.749 17:34:37 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.749 --rc genhtml_branch_coverage=1 00:04:13.749 --rc genhtml_function_coverage=1 00:04:13.749 --rc genhtml_legend=1 00:04:13.749 --rc geninfo_all_blocks=1 00:04:13.749 --rc geninfo_unexecuted_blocks=1 00:04:13.749 00:04:13.749 ' 00:04:13.749 17:34:37 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:13.749 17:34:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.749 17:34:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.749 17:34:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.749 ************************************ 00:04:13.749 START TEST env_memory 00:04:13.749 ************************************ 00:04:13.749 17:34:37 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:13.749 00:04:13.749 00:04:13.749 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.749 http://cunit.sourceforge.net/ 00:04:13.749 00:04:13.749 00:04:13.749 Suite: memory 00:04:13.749 Test: alloc and free memory map ...[2024-11-20 17:34:37.149526] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:13.749 passed 00:04:13.749 Test: mem map translation ...[2024-11-20 17:34:37.189095] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:13.749 [2024-11-20 17:34:37.189326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:13.749 [2024-11-20 17:34:37.189453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:13.749 [2024-11-20 17:34:37.189495] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:13.749 passed 00:04:13.749 Test: mem map registration ...[2024-11-20 17:34:37.258063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:13.749 [2024-11-20 17:34:37.258255] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:13.749 passed 00:04:14.008 Test: mem map adjacent registrations ...passed 00:04:14.008 00:04:14.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.008 suites 1 1 n/a 0 0 00:04:14.008 tests 4 4 4 0 0 00:04:14.008 asserts 152 152 152 0 n/a 00:04:14.008 00:04:14.008 Elapsed time = 0.234 seconds 00:04:14.008 00:04:14.008 ************************************ 00:04:14.008 END TEST env_memory 00:04:14.008 ************************************ 00:04:14.008 real 0m0.271s 00:04:14.008 user 0m0.240s 00:04:14.008 sys 0m0.022s 00:04:14.008 17:34:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.008 17:34:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:14.008 17:34:37 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.008 17:34:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.008 17:34:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.008 17:34:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.008 ************************************ 00:04:14.008 START TEST env_vtophys 00:04:14.008 ************************************ 00:04:14.008 17:34:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.008 EAL: lib.eal log level changed from notice to debug 00:04:14.008 EAL: Detected lcore 0 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 1 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 2 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 3 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 4 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 5 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 6 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 7 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 8 as core 0 on socket 0 00:04:14.008 EAL: Detected lcore 9 as core 0 on socket 0 00:04:14.008 EAL: Maximum logical cores by configuration: 128 00:04:14.008 EAL: Detected CPU lcores: 10 00:04:14.008 EAL: Detected NUMA nodes: 1 00:04:14.008 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:14.008 EAL: Detected shared linkage of DPDK 00:04:14.008 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:14.008 EAL: Selected IOVA mode 'PA' 00:04:14.008 EAL: Probing VFIO support... 00:04:14.008 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.008 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:14.008 EAL: Ask a virtual area of 0x2e000 bytes 00:04:14.008 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:14.008 EAL: Setting up physically contiguous memory... 00:04:14.008 EAL: Setting maximum number of open files to 524288 00:04:14.008 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:14.008 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:14.008 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.008 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:14.008 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.008 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.008 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:14.008 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:14.008 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.008 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:14.008 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.008 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.008 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:14.008 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:14.008 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.008 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:14.008 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.008 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.008 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:14.008 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:14.008 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.008 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:14.008 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.008 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.008 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:14.008 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:14.008 EAL: Hugepages will be freed exactly as allocated. 00:04:14.008 EAL: No shared files mode enabled, IPC is disabled 00:04:14.008 EAL: No shared files mode enabled, IPC is disabled 00:04:14.268 EAL: TSC frequency is ~2600000 KHz 00:04:14.268 EAL: Main lcore 0 is ready (tid=7f4d466c2a40;cpuset=[0]) 00:04:14.268 EAL: Trying to obtain current memory policy. 00:04:14.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.268 EAL: Restoring previous memory policy: 0 00:04:14.268 EAL: request: mp_malloc_sync 00:04:14.268 EAL: No shared files mode enabled, IPC is disabled 00:04:14.268 EAL: Heap on socket 0 was expanded by 2MB 00:04:14.268 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.268 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:14.268 EAL: Mem event callback 'spdk:(nil)' registered 00:04:14.268 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:14.268 00:04:14.268 00:04:14.268 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.268 http://cunit.sourceforge.net/ 00:04:14.268 00:04:14.268 00:04:14.268 Suite: components_suite 00:04:14.529 Test: vtophys_malloc_test ...passed 00:04:14.529 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:14.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.529 EAL: Restoring previous memory policy: 4 00:04:14.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.529 EAL: request: mp_malloc_sync 00:04:14.529 EAL: No shared files mode enabled, IPC is disabled 00:04:14.529 EAL: Heap on socket 0 was expanded by 4MB 00:04:14.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.529 EAL: request: mp_malloc_sync 00:04:14.529 EAL: No shared files mode enabled, IPC is disabled 00:04:14.529 EAL: Heap on socket 0 was shrunk by 4MB 00:04:14.529 EAL: Trying to obtain current memory policy. 00:04:14.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.529 EAL: Restoring previous memory policy: 4 00:04:14.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.529 EAL: request: mp_malloc_sync 00:04:14.529 EAL: No shared files mode enabled, IPC is disabled 00:04:14.529 EAL: Heap on socket 0 was expanded by 6MB 00:04:14.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.789 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was shrunk by 6MB 00:04:14.790 EAL: Trying to obtain current memory policy. 00:04:14.790 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.790 EAL: Restoring previous memory policy: 4 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was expanded by 10MB 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was shrunk by 10MB 00:04:14.790 EAL: Trying to obtain current memory policy. 00:04:14.790 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.790 EAL: Restoring previous memory policy: 4 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was expanded by 18MB 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was shrunk by 18MB 00:04:14.790 EAL: Trying to obtain current memory policy. 00:04:14.790 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.790 EAL: Restoring previous memory policy: 4 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was expanded by 34MB 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was shrunk by 34MB 00:04:14.790 EAL: Trying to obtain current memory policy. 
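The expand/shrink ladder above is DPDK's dynamic memory mode at work: vtophys_spdk_malloc_test grows its spdk_malloc request step by step (2 MB, 4 MB, 6 MB, 10 MB, ... up to 1026 MB further down), and EAL maps and unmaps hugepages on demand, firing the registered 'spdk:' mem event callback on every resize. A quick way to watch the same behavior outside the harness — a rough sketch, assuming hugepages are already reserved the way scripts/setup.sh does it:

    # Terminal 1: run the vtophys suite standalone (binary path as invoked above)
    sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys

    # Terminal 2: free hugepages dip and recover with each expand/shrink pair
    watch -n 0.2 'grep -E "HugePages_(Total|Free)" /proc/meminfo'
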
00:04:14.790 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.790 EAL: Restoring previous memory policy: 4 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was expanded by 66MB 00:04:14.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.790 EAL: request: mp_malloc_sync 00:04:14.790 EAL: No shared files mode enabled, IPC is disabled 00:04:14.790 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.050 EAL: Trying to obtain current memory policy. 00:04:15.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.051 EAL: Restoring previous memory policy: 4 00:04:15.051 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.051 EAL: request: mp_malloc_sync 00:04:15.051 EAL: No shared files mode enabled, IPC is disabled 00:04:15.051 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.051 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.312 EAL: request: mp_malloc_sync 00:04:15.312 EAL: No shared files mode enabled, IPC is disabled 00:04:15.312 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.312 EAL: Trying to obtain current memory policy. 00:04:15.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.312 EAL: Restoring previous memory policy: 4 00:04:15.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.312 EAL: request: mp_malloc_sync 00:04:15.312 EAL: No shared files mode enabled, IPC is disabled 00:04:15.312 EAL: Heap on socket 0 was expanded by 258MB 00:04:15.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.853 EAL: request: mp_malloc_sync 00:04:15.853 EAL: No shared files mode enabled, IPC is disabled 00:04:15.853 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.115 EAL: Trying to obtain current memory policy. 00:04:16.115 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.115 EAL: Restoring previous memory policy: 4 00:04:16.115 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.115 EAL: request: mp_malloc_sync 00:04:16.115 EAL: No shared files mode enabled, IPC is disabled 00:04:16.115 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.688 EAL: request: mp_malloc_sync 00:04:16.688 EAL: No shared files mode enabled, IPC is disabled 00:04:16.688 EAL: Heap on socket 0 was shrunk by 514MB 00:04:17.260 EAL: Trying to obtain current memory policy. 
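Every resize above is bracketed by a 'request: mp_malloc_sync' line: in multi-process DPDK this is where heap changes would be broadcast to secondary processes over the mp socket. The harness runs everything single-process ('--no-shconf' in the EAL parameter dumps elsewhere in this log), so the IPC leg is skipped and EAL notes 'IPC is disabled' each time. For reference, extra EAL options can be passed through to an SPDK app without patching it — a sketch only, assuming a built tree and the --env-context pass-through option; the log-level value mirrors the syntax seen in this log's EAL parameter dumps:

    # Raise EAL logging to watch the malloc heap traffic in detail
    sudo ./build/bin/spdk_tgt -m 0x1 --env-context="--log-level=lib.eal:8"
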
00:04:17.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.565 EAL: Restoring previous memory policy: 4 00:04:17.566 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.566 EAL: request: mp_malloc_sync 00:04:17.566 EAL: No shared files mode enabled, IPC is disabled 00:04:17.566 EAL: Heap on socket 0 was expanded by 1026MB 00:04:18.950 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.950 EAL: request: mp_malloc_sync 00:04:18.950 EAL: No shared files mode enabled, IPC is disabled 00:04:18.950 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:20.338 passed 00:04:20.338 00:04:20.338 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.338 suites 1 1 n/a 0 0 00:04:20.338 tests 2 2 2 0 0 00:04:20.338 asserts 5824 5824 5824 0 n/a 00:04:20.338 00:04:20.338 Elapsed time = 5.779 seconds 00:04:20.338 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.338 EAL: request: mp_malloc_sync 00:04:20.338 EAL: No shared files mode enabled, IPC is disabled 00:04:20.338 EAL: Heap on socket 0 was shrunk by 2MB 00:04:20.338 EAL: No shared files mode enabled, IPC is disabled 00:04:20.338 EAL: No shared files mode enabled, IPC is disabled 00:04:20.338 EAL: No shared files mode enabled, IPC is disabled 00:04:20.338 00:04:20.338 real 0m6.096s 00:04:20.338 user 0m4.932s 00:04:20.338 sys 0m0.989s 00:04:20.338 17:34:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.338 ************************************ 00:04:20.338 END TEST env_vtophys 00:04:20.338 17:34:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:20.338 ************************************ 00:04:20.338 17:34:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:20.338 17:34:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.338 17:34:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.338 17:34:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.338 ************************************ 00:04:20.338 START TEST env_pci 00:04:20.338 ************************************ 00:04:20.338 17:34:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:20.338 00:04:20.338 00:04:20.339 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.339 http://cunit.sourceforge.net/ 00:04:20.339 00:04:20.339 00:04:20.339 Suite: pci 00:04:20.339 Test: pci_hook ...[2024-11-20 17:34:43.643099] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57042 has claimed it 00:04:20.339 EAL: Cannot find device (10000:00:01.0) 00:04:20.339 EAL: Failed to attach device on primary process 00:04:20.339 passed 00:04:20.339 00:04:20.339 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.339 suites 1 1 n/a 0 0 00:04:20.339 tests 1 1 1 0 0 00:04:20.339 asserts 25 25 25 0 n/a 00:04:20.339 00:04:20.339 Elapsed time = 0.008 seconds 00:04:20.339 00:04:20.339 real 0m0.073s 00:04:20.339 user 0m0.029s 00:04:20.339 sys 0m0.040s 00:04:20.339 17:34:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.339 17:34:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:20.339 ************************************ 00:04:20.339 END TEST env_pci 00:04:20.339 ************************************ 00:04:20.339 17:34:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:20.339 17:34:43 env -- env/env.sh@15 -- # uname 00:04:20.339 17:34:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:20.339 17:34:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:20.339 17:34:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.339 17:34:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:20.339 17:34:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.339 17:34:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.339 ************************************ 00:04:20.339 START TEST env_dpdk_post_init 00:04:20.339 ************************************ 00:04:20.339 17:34:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.339 EAL: Detected CPU lcores: 10 00:04:20.339 EAL: Detected NUMA nodes: 1 00:04:20.339 EAL: Detected shared linkage of DPDK 00:04:20.339 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.339 EAL: Selected IOVA mode 'PA' 00:04:20.600 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:20.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:20.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:20.600 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:20.600 Starting DPDK initialization... 00:04:20.600 Starting SPDK post initialization... 00:04:20.600 SPDK NVMe probe 00:04:20.600 Attaching to 0000:00:10.0 00:04:20.600 Attaching to 0000:00:11.0 00:04:20.600 Attaching to 0000:00:12.0 00:04:20.600 Attaching to 0000:00:13.0 00:04:20.600 Attached to 0000:00:13.0 00:04:20.600 Attached to 0000:00:10.0 00:04:20.600 Attached to 0000:00:11.0 00:04:20.600 Attached to 0000:00:12.0 00:04:20.600 Cleaning up... 
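All four controllers probed above are QEMU's emulated NVMe device (vendor:device 1b36:0010); the attach callbacks complete per-controller, which is why the attach order (13.0 first) differs from the probe order. To see the same devices from the host side — a sketch, assuming an SPDK checkout with its standard scripts:

    # List the emulated NVMe functions the probe walked above
    lspci -nn -d 1b36:0010

    # Show which driver each function is currently bound to; scripts/setup.sh
    # is what rebinds them away from the kernel nvme driver before these tests
    sudo ./scripts/setup.sh status
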
00:04:20.600 00:04:20.600 real 0m0.282s 00:04:20.600 user 0m0.092s 00:04:20.600 sys 0m0.090s 00:04:20.600 17:34:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.600 17:34:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.600 ************************************ 00:04:20.600 END TEST env_dpdk_post_init 00:04:20.600 ************************************ 00:04:20.600 17:34:44 env -- env/env.sh@26 -- # uname 00:04:20.600 17:34:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:20.600 17:34:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.600 17:34:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.600 17:34:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.600 17:34:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.600 ************************************ 00:04:20.600 START TEST env_mem_callbacks 00:04:20.600 ************************************ 00:04:20.600 17:34:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.862 EAL: Detected CPU lcores: 10 00:04:20.862 EAL: Detected NUMA nodes: 1 00:04:20.862 EAL: Detected shared linkage of DPDK 00:04:20.862 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.862 EAL: Selected IOVA mode 'PA' 00:04:20.862 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.862 00:04:20.862 00:04:20.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.862 http://cunit.sourceforge.net/ 00:04:20.862 00:04:20.862 00:04:20.862 Suite: memory 00:04:20.862 Test: test ... 00:04:20.862 register 0x200000200000 2097152 00:04:20.862 malloc 3145728 00:04:20.862 register 0x200000400000 4194304 00:04:20.862 buf 0x2000004fffc0 len 3145728 PASSED 00:04:20.862 malloc 64 00:04:20.862 buf 0x2000004ffec0 len 64 PASSED 00:04:20.862 malloc 4194304 00:04:20.862 register 0x200000800000 6291456 00:04:20.862 buf 0x2000009fffc0 len 4194304 PASSED 00:04:20.862 free 0x2000004fffc0 3145728 00:04:20.862 free 0x2000004ffec0 64 00:04:20.862 unregister 0x200000400000 4194304 PASSED 00:04:20.862 free 0x2000009fffc0 4194304 00:04:20.862 unregister 0x200000800000 6291456 PASSED 00:04:20.862 malloc 8388608 00:04:20.862 register 0x200000400000 10485760 00:04:20.862 buf 0x2000005fffc0 len 8388608 PASSED 00:04:20.862 free 0x2000005fffc0 8388608 00:04:20.862 unregister 0x200000400000 10485760 PASSED 00:04:20.862 passed 00:04:20.862 00:04:20.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.862 suites 1 1 n/a 0 0 00:04:20.862 tests 1 1 1 0 0 00:04:20.862 asserts 15 15 15 0 n/a 00:04:20.862 00:04:20.862 Elapsed time = 0.049 seconds 00:04:20.862 00:04:20.862 real 0m0.243s 00:04:20.862 user 0m0.073s 00:04:20.862 sys 0m0.064s 00:04:20.862 17:34:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.862 17:34:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:20.862 ************************************ 00:04:20.862 END TEST env_mem_callbacks 00:04:20.862 ************************************ 00:04:21.124 ************************************ 00:04:21.124 END TEST env 00:04:21.124 ************************************ 00:04:21.124 00:04:21.124 real 0m7.509s 00:04:21.124 user 0m5.532s 00:04:21.124 sys 0m1.451s 00:04:21.124 17:34:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.124 17:34:44 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:21.124 17:34:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.124 17:34:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.124 17:34:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.124 17:34:44 -- common/autotest_common.sh@10 -- # set +x 00:04:21.124 ************************************ 00:04:21.124 START TEST rpc 00:04:21.124 ************************************ 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.124 * Looking for test storage... 00:04:21.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.124 17:34:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.124 17:34:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.124 17:34:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.124 17:34:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.124 17:34:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.124 17:34:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.124 17:34:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.124 17:34:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.124 17:34:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.124 17:34:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.124 17:34:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.124 17:34:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.124 17:34:44 rpc -- scripts/common.sh@345 -- # : 1 00:04:21.124 17:34:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.124 17:34:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.124 17:34:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.124 17:34:44 rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.124 17:34:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.124 17:34:44 rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.124 17:34:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.124 17:34:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.124 17:34:44 rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.124 17:34:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.124 17:34:44 rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
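The 'Waiting for process to start up' line above comes from waitforlisten (its xtrace follows just below), which simply polls until the target creates its UNIX-domain RPC socket; from then on, every rpc_cmd in this suite is a thin wrapper around scripts/rpc.py pointed at that socket. A minimal manual equivalent against a running target:

    # Confirm the target is answering and list the RPCs it exposes
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods | head
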
00:04:21.124 17:34:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.124 17:34:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.124 17:34:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.124 17:34:44 rpc -- scripts/common.sh@368 -- # return 0 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.124 --rc genhtml_branch_coverage=1 00:04:21.124 --rc genhtml_function_coverage=1 00:04:21.124 --rc genhtml_legend=1 00:04:21.124 --rc geninfo_all_blocks=1 00:04:21.124 --rc geninfo_unexecuted_blocks=1 00:04:21.124 00:04:21.124 ' 00:04:21.124 17:34:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.124 --rc genhtml_branch_coverage=1 00:04:21.124 --rc genhtml_function_coverage=1 00:04:21.125 --rc genhtml_legend=1 00:04:21.125 --rc geninfo_all_blocks=1 00:04:21.125 --rc geninfo_unexecuted_blocks=1 00:04:21.125 00:04:21.125 ' 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.125 --rc genhtml_branch_coverage=1 00:04:21.125 --rc genhtml_function_coverage=1 00:04:21.125 --rc genhtml_legend=1 00:04:21.125 --rc geninfo_all_blocks=1 00:04:21.125 --rc geninfo_unexecuted_blocks=1 00:04:21.125 00:04:21.125 ' 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.125 --rc genhtml_branch_coverage=1 00:04:21.125 --rc genhtml_function_coverage=1 00:04:21.125 --rc genhtml_legend=1 00:04:21.125 --rc geninfo_all_blocks=1 00:04:21.125 --rc geninfo_unexecuted_blocks=1 00:04:21.125 00:04:21.125 ' 00:04:21.125 17:34:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57169 00:04:21.125 17:34:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.125 17:34:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57169 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 57169 ']' 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.125 17:34:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:21.125 17:34:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.386 [2024-11-20 17:34:44.752432] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:04:21.386 [2024-11-20 17:34:44.752942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57169 ] 00:04:21.386 [2024-11-20 17:34:44.923752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.648 [2024-11-20 17:34:45.059577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
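The target was launched with '-e bdev', which pre-enables the bdev tracepoint group — that is what the trace_get_info dump further down reports as tpoint_group_mask 0x8 with a fully-set bdev tpoint_mask. The app_setup_trace notices that follow show how to harvest the trace buffer; the same state is also reachable over RPC on the live target — a sketch, assuming the default socket:

    # Inspect the enabled tracepoint groups, or turn on another one
    ./scripts/rpc.py trace_get_info
    ./scripts/rpc.py trace_enable_tpoint_group nvmf_tcp
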
00:04:21.648 [2024-11-20 17:34:45.059856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57169' to capture a snapshot of events at runtime. 00:04:21.648 [2024-11-20 17:34:45.060078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:21.648 [2024-11-20 17:34:45.060117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:21.648 [2024-11-20 17:34:45.060137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57169 for offline analysis/debug. 00:04:21.648 [2024-11-20 17:34:45.061138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.594 17:34:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.594 17:34:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.594 17:34:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.594 17:34:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.594 17:34:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:22.594 17:34:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:22.594 17:34:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.594 17:34:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.594 17:34:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.594 ************************************ 00:04:22.594 START TEST rpc_integrity 00:04:22.594 ************************************ 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.594 { 00:04:22.594 "name": "Malloc0", 00:04:22.594 "aliases": [ 00:04:22.594 "2cf73ad5-651e-4bcf-97e5-314001e4517e" 00:04:22.594 ], 
00:04:22.594 "product_name": "Malloc disk", 00:04:22.594 "block_size": 512, 00:04:22.594 "num_blocks": 16384, 00:04:22.594 "uuid": "2cf73ad5-651e-4bcf-97e5-314001e4517e", 00:04:22.594 "assigned_rate_limits": { 00:04:22.594 "rw_ios_per_sec": 0, 00:04:22.594 "rw_mbytes_per_sec": 0, 00:04:22.594 "r_mbytes_per_sec": 0, 00:04:22.594 "w_mbytes_per_sec": 0 00:04:22.594 }, 00:04:22.594 "claimed": false, 00:04:22.594 "zoned": false, 00:04:22.594 "supported_io_types": { 00:04:22.594 "read": true, 00:04:22.594 "write": true, 00:04:22.594 "unmap": true, 00:04:22.594 "flush": true, 00:04:22.594 "reset": true, 00:04:22.594 "nvme_admin": false, 00:04:22.594 "nvme_io": false, 00:04:22.594 "nvme_io_md": false, 00:04:22.594 "write_zeroes": true, 00:04:22.594 "zcopy": true, 00:04:22.594 "get_zone_info": false, 00:04:22.594 "zone_management": false, 00:04:22.594 "zone_append": false, 00:04:22.594 "compare": false, 00:04:22.594 "compare_and_write": false, 00:04:22.594 "abort": true, 00:04:22.594 "seek_hole": false, 00:04:22.594 "seek_data": false, 00:04:22.594 "copy": true, 00:04:22.594 "nvme_iov_md": false 00:04:22.594 }, 00:04:22.594 "memory_domains": [ 00:04:22.594 { 00:04:22.594 "dma_device_id": "system", 00:04:22.594 "dma_device_type": 1 00:04:22.594 }, 00:04:22.594 { 00:04:22.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.594 "dma_device_type": 2 00:04:22.594 } 00:04:22.594 ], 00:04:22.594 "driver_specific": {} 00:04:22.594 } 00:04:22.594 ]' 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.594 [2024-11-20 17:34:45.922464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:22.594 [2024-11-20 17:34:45.922547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.594 [2024-11-20 17:34:45.922579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:22.594 [2024-11-20 17:34:45.922593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.594 [2024-11-20 17:34:45.925204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.594 [2024-11-20 17:34:45.925420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.594 Passthru0 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.594 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.594 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:22.594 { 00:04:22.594 "name": "Malloc0", 00:04:22.594 "aliases": [ 00:04:22.594 "2cf73ad5-651e-4bcf-97e5-314001e4517e" 00:04:22.594 ], 00:04:22.594 "product_name": "Malloc disk", 00:04:22.594 "block_size": 512, 00:04:22.594 "num_blocks": 16384, 00:04:22.594 "uuid": "2cf73ad5-651e-4bcf-97e5-314001e4517e", 00:04:22.594 "assigned_rate_limits": { 00:04:22.594 "rw_ios_per_sec": 0, 
00:04:22.594 "rw_mbytes_per_sec": 0, 00:04:22.594 "r_mbytes_per_sec": 0, 00:04:22.594 "w_mbytes_per_sec": 0 00:04:22.594 }, 00:04:22.594 "claimed": true, 00:04:22.594 "claim_type": "exclusive_write", 00:04:22.594 "zoned": false, 00:04:22.594 "supported_io_types": { 00:04:22.594 "read": true, 00:04:22.594 "write": true, 00:04:22.594 "unmap": true, 00:04:22.594 "flush": true, 00:04:22.594 "reset": true, 00:04:22.594 "nvme_admin": false, 00:04:22.595 "nvme_io": false, 00:04:22.595 "nvme_io_md": false, 00:04:22.595 "write_zeroes": true, 00:04:22.595 "zcopy": true, 00:04:22.595 "get_zone_info": false, 00:04:22.595 "zone_management": false, 00:04:22.595 "zone_append": false, 00:04:22.595 "compare": false, 00:04:22.595 "compare_and_write": false, 00:04:22.595 "abort": true, 00:04:22.595 "seek_hole": false, 00:04:22.595 "seek_data": false, 00:04:22.595 "copy": true, 00:04:22.595 "nvme_iov_md": false 00:04:22.595 }, 00:04:22.595 "memory_domains": [ 00:04:22.595 { 00:04:22.595 "dma_device_id": "system", 00:04:22.595 "dma_device_type": 1 00:04:22.595 }, 00:04:22.595 { 00:04:22.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.595 "dma_device_type": 2 00:04:22.595 } 00:04:22.595 ], 00:04:22.595 "driver_specific": {} 00:04:22.595 }, 00:04:22.595 { 00:04:22.595 "name": "Passthru0", 00:04:22.595 "aliases": [ 00:04:22.595 "d27dbb42-2680-5e73-a396-2d4e56aa023e" 00:04:22.595 ], 00:04:22.595 "product_name": "passthru", 00:04:22.595 "block_size": 512, 00:04:22.595 "num_blocks": 16384, 00:04:22.595 "uuid": "d27dbb42-2680-5e73-a396-2d4e56aa023e", 00:04:22.595 "assigned_rate_limits": { 00:04:22.595 "rw_ios_per_sec": 0, 00:04:22.595 "rw_mbytes_per_sec": 0, 00:04:22.595 "r_mbytes_per_sec": 0, 00:04:22.595 "w_mbytes_per_sec": 0 00:04:22.595 }, 00:04:22.595 "claimed": false, 00:04:22.595 "zoned": false, 00:04:22.595 "supported_io_types": { 00:04:22.595 "read": true, 00:04:22.595 "write": true, 00:04:22.595 "unmap": true, 00:04:22.595 "flush": true, 00:04:22.595 "reset": true, 00:04:22.595 "nvme_admin": false, 00:04:22.595 "nvme_io": false, 00:04:22.595 "nvme_io_md": false, 00:04:22.595 "write_zeroes": true, 00:04:22.595 "zcopy": true, 00:04:22.595 "get_zone_info": false, 00:04:22.595 "zone_management": false, 00:04:22.595 "zone_append": false, 00:04:22.595 "compare": false, 00:04:22.595 "compare_and_write": false, 00:04:22.595 "abort": true, 00:04:22.595 "seek_hole": false, 00:04:22.595 "seek_data": false, 00:04:22.595 "copy": true, 00:04:22.595 "nvme_iov_md": false 00:04:22.595 }, 00:04:22.595 "memory_domains": [ 00:04:22.595 { 00:04:22.595 "dma_device_id": "system", 00:04:22.595 "dma_device_type": 1 00:04:22.595 }, 00:04:22.595 { 00:04:22.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.595 "dma_device_type": 2 00:04:22.595 } 00:04:22.595 ], 00:04:22.595 "driver_specific": { 00:04:22.595 "passthru": { 00:04:22.595 "name": "Passthru0", 00:04:22.595 "base_bdev_name": "Malloc0" 00:04:22.595 } 00:04:22.595 } 00:04:22.595 } 00:04:22.595 ]' 00:04:22.595 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:22.595 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:22.595 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:22.595 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.595 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.595 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.595 17:34:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:04:22.595 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.595 17:34:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.595 17:34:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.595 17:34:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:22.595 17:34:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.595 17:34:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.595 17:34:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.595 17:34:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:22.595 17:34:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.595 ************************************ 00:04:22.595 END TEST rpc_integrity 00:04:22.595 ************************************ 00:04:22.595 17:34:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.595 00:04:22.595 real 0m0.267s 00:04:22.595 user 0m0.130s 00:04:22.595 sys 0m0.047s 00:04:22.595 17:34:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.595 17:34:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.595 17:34:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:22.595 17:34:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.595 17:34:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.595 17:34:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 ************************************ 00:04:22.857 START TEST rpc_plugins 00:04:22.857 ************************************ 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:22.857 { 00:04:22.857 "name": "Malloc1", 00:04:22.857 "aliases": [ 00:04:22.857 "123c2fde-1407-4a02-8759-59b043ec48af" 00:04:22.857 ], 00:04:22.857 "product_name": "Malloc disk", 00:04:22.857 "block_size": 4096, 00:04:22.857 "num_blocks": 256, 00:04:22.857 "uuid": "123c2fde-1407-4a02-8759-59b043ec48af", 00:04:22.857 "assigned_rate_limits": { 00:04:22.857 "rw_ios_per_sec": 0, 00:04:22.857 "rw_mbytes_per_sec": 0, 00:04:22.857 "r_mbytes_per_sec": 0, 00:04:22.857 "w_mbytes_per_sec": 0 00:04:22.857 }, 00:04:22.857 "claimed": false, 00:04:22.857 "zoned": false, 00:04:22.857 "supported_io_types": { 00:04:22.857 "read": true, 00:04:22.857 "write": true, 00:04:22.857 "unmap": true, 00:04:22.857 "flush": true, 00:04:22.857 "reset": true, 00:04:22.857 "nvme_admin": false, 00:04:22.857 "nvme_io": false, 00:04:22.857 "nvme_io_md": false, 00:04:22.857 "write_zeroes": true, 
00:04:22.857 "zcopy": true, 00:04:22.857 "get_zone_info": false, 00:04:22.857 "zone_management": false, 00:04:22.857 "zone_append": false, 00:04:22.857 "compare": false, 00:04:22.857 "compare_and_write": false, 00:04:22.857 "abort": true, 00:04:22.857 "seek_hole": false, 00:04:22.857 "seek_data": false, 00:04:22.857 "copy": true, 00:04:22.857 "nvme_iov_md": false 00:04:22.857 }, 00:04:22.857 "memory_domains": [ 00:04:22.857 { 00:04:22.857 "dma_device_id": "system", 00:04:22.857 "dma_device_type": 1 00:04:22.857 }, 00:04:22.857 { 00:04:22.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.857 "dma_device_type": 2 00:04:22.857 } 00:04:22.857 ], 00:04:22.857 "driver_specific": {} 00:04:22.857 } 00:04:22.857 ]' 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:22.857 ************************************ 00:04:22.857 END TEST rpc_plugins 00:04:22.857 ************************************ 00:04:22.857 17:34:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:22.857 00:04:22.857 real 0m0.122s 00:04:22.857 user 0m0.068s 00:04:22.857 sys 0m0.014s 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.857 17:34:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 17:34:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:22.857 17:34:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.857 17:34:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.857 17:34:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 ************************************ 00:04:22.857 START TEST rpc_trace_cmd_test 00:04:22.857 ************************************ 00:04:22.857 17:34:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:22.857 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:22.857 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:22.857 17:34:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.857 17:34:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:22.857 17:34:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.857 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:22.857 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57169", 00:04:22.857 "tpoint_group_mask": "0x8", 00:04:22.857 "iscsi_conn": { 00:04:22.857 "mask": "0x2", 00:04:22.857 "tpoint_mask": "0x0" 00:04:22.857 }, 00:04:22.857 "scsi": { 00:04:22.857 
"mask": "0x4", 00:04:22.857 "tpoint_mask": "0x0" 00:04:22.857 }, 00:04:22.857 "bdev": { 00:04:22.857 "mask": "0x8", 00:04:22.857 "tpoint_mask": "0xffffffffffffffff" 00:04:22.857 }, 00:04:22.857 "nvmf_rdma": { 00:04:22.857 "mask": "0x10", 00:04:22.857 "tpoint_mask": "0x0" 00:04:22.857 }, 00:04:22.857 "nvmf_tcp": { 00:04:22.857 "mask": "0x20", 00:04:22.857 "tpoint_mask": "0x0" 00:04:22.857 }, 00:04:22.857 "ftl": { 00:04:22.857 "mask": "0x40", 00:04:22.857 "tpoint_mask": "0x0" 00:04:22.857 }, 00:04:22.857 "blobfs": { 00:04:22.858 "mask": "0x80", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "dsa": { 00:04:22.858 "mask": "0x200", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "thread": { 00:04:22.858 "mask": "0x400", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "nvme_pcie": { 00:04:22.858 "mask": "0x800", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "iaa": { 00:04:22.858 "mask": "0x1000", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "nvme_tcp": { 00:04:22.858 "mask": "0x2000", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "bdev_nvme": { 00:04:22.858 "mask": "0x4000", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "sock": { 00:04:22.858 "mask": "0x8000", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "blob": { 00:04:22.858 "mask": "0x10000", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "bdev_raid": { 00:04:22.858 "mask": "0x20000", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 }, 00:04:22.858 "scheduler": { 00:04:22.858 "mask": "0x40000", 00:04:22.858 "tpoint_mask": "0x0" 00:04:22.858 } 00:04:22.858 }' 00:04:22.858 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:22.858 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:22.858 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.119 ************************************ 00:04:23.119 END TEST rpc_trace_cmd_test 00:04:23.119 ************************************ 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.119 00:04:23.119 real 0m0.176s 00:04:23.119 user 0m0.139s 00:04:23.119 sys 0m0.027s 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.119 17:34:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.119 17:34:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:23.119 17:34:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:23.119 17:34:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:23.119 17:34:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.119 17:34:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.119 17:34:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.119 ************************************ 00:04:23.119 START TEST rpc_daemon_integrity 00:04:23.119 
************************************ 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.119 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.383 { 00:04:23.383 "name": "Malloc2", 00:04:23.383 "aliases": [ 00:04:23.383 "80661963-6965-403d-a790-a3cc34c13762" 00:04:23.383 ], 00:04:23.383 "product_name": "Malloc disk", 00:04:23.383 "block_size": 512, 00:04:23.383 "num_blocks": 16384, 00:04:23.383 "uuid": "80661963-6965-403d-a790-a3cc34c13762", 00:04:23.383 "assigned_rate_limits": { 00:04:23.383 "rw_ios_per_sec": 0, 00:04:23.383 "rw_mbytes_per_sec": 0, 00:04:23.383 "r_mbytes_per_sec": 0, 00:04:23.383 "w_mbytes_per_sec": 0 00:04:23.383 }, 00:04:23.383 "claimed": false, 00:04:23.383 "zoned": false, 00:04:23.383 "supported_io_types": { 00:04:23.383 "read": true, 00:04:23.383 "write": true, 00:04:23.383 "unmap": true, 00:04:23.383 "flush": true, 00:04:23.383 "reset": true, 00:04:23.383 "nvme_admin": false, 00:04:23.383 "nvme_io": false, 00:04:23.383 "nvme_io_md": false, 00:04:23.383 "write_zeroes": true, 00:04:23.383 "zcopy": true, 00:04:23.383 "get_zone_info": false, 00:04:23.383 "zone_management": false, 00:04:23.383 "zone_append": false, 00:04:23.383 "compare": false, 00:04:23.383 "compare_and_write": false, 00:04:23.383 "abort": true, 00:04:23.383 "seek_hole": false, 00:04:23.383 "seek_data": false, 00:04:23.383 "copy": true, 00:04:23.383 "nvme_iov_md": false 00:04:23.383 }, 00:04:23.383 "memory_domains": [ 00:04:23.383 { 00:04:23.383 "dma_device_id": "system", 00:04:23.383 "dma_device_type": 1 00:04:23.383 }, 00:04:23.383 { 00:04:23.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.383 "dma_device_type": 2 00:04:23.383 } 00:04:23.383 ], 00:04:23.383 "driver_specific": {} 00:04:23.383 } 00:04:23.383 ]' 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.383 [2024-11-20 17:34:46.693499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:23.383 [2024-11-20 17:34:46.693576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.383 [2024-11-20 17:34:46.693601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:23.383 [2024-11-20 17:34:46.693613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.383 [2024-11-20 17:34:46.696229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.383 [2024-11-20 17:34:46.696448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.383 Passthru0 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.383 { 00:04:23.383 "name": "Malloc2", 00:04:23.383 "aliases": [ 00:04:23.383 "80661963-6965-403d-a790-a3cc34c13762" 00:04:23.383 ], 00:04:23.383 "product_name": "Malloc disk", 00:04:23.383 "block_size": 512, 00:04:23.383 "num_blocks": 16384, 00:04:23.383 "uuid": "80661963-6965-403d-a790-a3cc34c13762", 00:04:23.383 "assigned_rate_limits": { 00:04:23.383 "rw_ios_per_sec": 0, 00:04:23.383 "rw_mbytes_per_sec": 0, 00:04:23.383 "r_mbytes_per_sec": 0, 00:04:23.383 "w_mbytes_per_sec": 0 00:04:23.383 }, 00:04:23.383 "claimed": true, 00:04:23.383 "claim_type": "exclusive_write", 00:04:23.383 "zoned": false, 00:04:23.383 "supported_io_types": { 00:04:23.383 "read": true, 00:04:23.383 "write": true, 00:04:23.383 "unmap": true, 00:04:23.383 "flush": true, 00:04:23.383 "reset": true, 00:04:23.383 "nvme_admin": false, 00:04:23.383 "nvme_io": false, 00:04:23.383 "nvme_io_md": false, 00:04:23.383 "write_zeroes": true, 00:04:23.383 "zcopy": true, 00:04:23.383 "get_zone_info": false, 00:04:23.383 "zone_management": false, 00:04:23.383 "zone_append": false, 00:04:23.383 "compare": false, 00:04:23.383 "compare_and_write": false, 00:04:23.383 "abort": true, 00:04:23.383 "seek_hole": false, 00:04:23.383 "seek_data": false, 00:04:23.383 "copy": true, 00:04:23.383 "nvme_iov_md": false 00:04:23.383 }, 00:04:23.383 "memory_domains": [ 00:04:23.383 { 00:04:23.383 "dma_device_id": "system", 00:04:23.383 "dma_device_type": 1 00:04:23.383 }, 00:04:23.383 { 00:04:23.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.383 "dma_device_type": 2 00:04:23.383 } 00:04:23.383 ], 00:04:23.383 "driver_specific": {} 00:04:23.383 }, 00:04:23.383 { 00:04:23.383 "name": "Passthru0", 00:04:23.383 "aliases": [ 00:04:23.383 "2c140838-526e-578a-afae-0ee4aa789ce3" 00:04:23.383 ], 00:04:23.383 "product_name": "passthru", 00:04:23.383 "block_size": 512, 00:04:23.383 "num_blocks": 16384, 00:04:23.383 "uuid": "2c140838-526e-578a-afae-0ee4aa789ce3", 00:04:23.383 "assigned_rate_limits": { 00:04:23.383 
"rw_ios_per_sec": 0, 00:04:23.383 "rw_mbytes_per_sec": 0, 00:04:23.383 "r_mbytes_per_sec": 0, 00:04:23.383 "w_mbytes_per_sec": 0 00:04:23.383 }, 00:04:23.383 "claimed": false, 00:04:23.383 "zoned": false, 00:04:23.383 "supported_io_types": { 00:04:23.383 "read": true, 00:04:23.383 "write": true, 00:04:23.383 "unmap": true, 00:04:23.383 "flush": true, 00:04:23.383 "reset": true, 00:04:23.383 "nvme_admin": false, 00:04:23.383 "nvme_io": false, 00:04:23.383 "nvme_io_md": false, 00:04:23.383 "write_zeroes": true, 00:04:23.383 "zcopy": true, 00:04:23.383 "get_zone_info": false, 00:04:23.383 "zone_management": false, 00:04:23.383 "zone_append": false, 00:04:23.383 "compare": false, 00:04:23.383 "compare_and_write": false, 00:04:23.383 "abort": true, 00:04:23.383 "seek_hole": false, 00:04:23.383 "seek_data": false, 00:04:23.383 "copy": true, 00:04:23.383 "nvme_iov_md": false 00:04:23.383 }, 00:04:23.383 "memory_domains": [ 00:04:23.383 { 00:04:23.383 "dma_device_id": "system", 00:04:23.383 "dma_device_type": 1 00:04:23.383 }, 00:04:23.383 { 00:04:23.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.383 "dma_device_type": 2 00:04:23.383 } 00:04:23.383 ], 00:04:23.383 "driver_specific": { 00:04:23.383 "passthru": { 00:04:23.383 "name": "Passthru0", 00:04:23.383 "base_bdev_name": "Malloc2" 00:04:23.383 } 00:04:23.383 } 00:04:23.383 } 00:04:23.383 ]' 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.383 ************************************ 00:04:23.383 END TEST rpc_daemon_integrity 00:04:23.383 ************************************ 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.383 00:04:23.383 real 0m0.258s 00:04:23.383 user 0m0.126s 00:04:23.383 sys 0m0.041s 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.383 17:34:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.383 17:34:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.383 17:34:46 rpc -- rpc/rpc.sh@84 -- # killprocess 57169 00:04:23.383 17:34:46 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57169 ']' 00:04:23.383 17:34:46 rpc -- common/autotest_common.sh@958 -- # kill -0 57169 00:04:23.383 17:34:46 rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.384 17:34:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.384 17:34:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57169 00:04:23.644 killing process with pid 57169 00:04:23.644 17:34:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.645 17:34:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.645 17:34:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57169' 00:04:23.645 17:34:46 rpc -- common/autotest_common.sh@973 -- # kill 57169 00:04:23.645 17:34:46 rpc -- common/autotest_common.sh@978 -- # wait 57169 00:04:25.087 00:04:25.087 real 0m4.123s 00:04:25.087 user 0m4.381s 00:04:25.087 sys 0m0.862s 00:04:25.087 17:34:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.087 17:34:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.087 ************************************ 00:04:25.087 END TEST rpc 00:04:25.087 ************************************ 00:04:25.348 17:34:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.348 17:34:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.348 17:34:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.348 17:34:48 -- common/autotest_common.sh@10 -- # set +x 00:04:25.348 ************************************ 00:04:25.348 START TEST skip_rpc 00:04:25.348 ************************************ 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.349 * Looking for test storage... 00:04:25.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.349 17:34:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.349 --rc genhtml_branch_coverage=1 00:04:25.349 --rc genhtml_function_coverage=1 00:04:25.349 --rc genhtml_legend=1 00:04:25.349 --rc geninfo_all_blocks=1 00:04:25.349 --rc geninfo_unexecuted_blocks=1 00:04:25.349 00:04:25.349 ' 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.349 --rc genhtml_branch_coverage=1 00:04:25.349 --rc genhtml_function_coverage=1 00:04:25.349 --rc genhtml_legend=1 00:04:25.349 --rc geninfo_all_blocks=1 00:04:25.349 --rc geninfo_unexecuted_blocks=1 00:04:25.349 00:04:25.349 ' 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.349 --rc genhtml_branch_coverage=1 00:04:25.349 --rc genhtml_function_coverage=1 00:04:25.349 --rc genhtml_legend=1 00:04:25.349 --rc geninfo_all_blocks=1 00:04:25.349 --rc geninfo_unexecuted_blocks=1 00:04:25.349 00:04:25.349 ' 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.349 --rc genhtml_branch_coverage=1 00:04:25.349 --rc genhtml_function_coverage=1 00:04:25.349 --rc genhtml_legend=1 00:04:25.349 --rc geninfo_all_blocks=1 00:04:25.349 --rc geninfo_unexecuted_blocks=1 00:04:25.349 00:04:25.349 ' 00:04:25.349 17:34:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.349 17:34:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.349 17:34:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.349 17:34:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.349 ************************************ 00:04:25.349 START TEST skip_rpc 00:04:25.349 ************************************ 00:04:25.349 17:34:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:25.349 17:34:48 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57387 00:04:25.349 17:34:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.349 17:34:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.349 17:34:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.609 [2024-11-20 17:34:48.948655] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:04:25.609 [2024-11-20 17:34:48.949056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57387 ] 00:04:25.609 [2024-11-20 17:34:49.114326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.870 [2024-11-20 17:34:49.254123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57387 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57387 ']' 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57387 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57387 00:04:31.162 killing process with pid 57387 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57387' 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 57387 00:04:31.162 17:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57387 00:04:32.105 00:04:32.105 real 0m6.738s 00:04:32.105 user 0m6.214s 00:04:32.105 sys 0m0.395s 00:04:32.105 17:34:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.105 17:34:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.105 ************************************ 00:04:32.105 END TEST skip_rpc 00:04:32.105 ************************************ 00:04:32.366 17:34:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.366 17:34:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.366 17:34:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.366 17:34:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.366 ************************************ 00:04:32.366 START TEST skip_rpc_with_json 00:04:32.366 ************************************ 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57485 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57485 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57485 ']' 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.366 17:34:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.366 [2024-11-20 17:34:55.753204] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:04:32.366 [2024-11-20 17:34:55.753602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57485 ] 00:04:32.627 [2024-11-20 17:34:55.917899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.627 [2024-11-20 17:34:56.057543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.573 [2024-11-20 17:34:56.754643] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.573 request: 00:04:33.573 { 00:04:33.573 "trtype": "tcp", 00:04:33.573 "method": "nvmf_get_transports", 00:04:33.573 "req_id": 1 00:04:33.573 } 00:04:33.573 Got JSON-RPC error response 00:04:33.573 response: 00:04:33.573 { 00:04:33.573 "code": -19, 00:04:33.573 "message": "No such device" 00:04:33.573 } 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.573 [2024-11-20 17:34:56.766810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.573 17:34:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.573 { 00:04:33.573 "subsystems": [ 00:04:33.573 { 00:04:33.573 "subsystem": "fsdev", 00:04:33.573 "config": [ 00:04:33.573 { 00:04:33.573 "method": "fsdev_set_opts", 00:04:33.573 "params": { 00:04:33.573 "fsdev_io_pool_size": 65535, 00:04:33.573 "fsdev_io_cache_size": 256 00:04:33.573 } 00:04:33.573 } 00:04:33.573 ] 00:04:33.573 }, 00:04:33.573 { 00:04:33.573 "subsystem": "keyring", 00:04:33.573 "config": [] 00:04:33.573 }, 00:04:33.573 { 00:04:33.573 "subsystem": "iobuf", 00:04:33.573 "config": [ 00:04:33.573 { 00:04:33.573 "method": "iobuf_set_options", 00:04:33.573 "params": { 00:04:33.573 "small_pool_count": 8192, 00:04:33.573 "large_pool_count": 1024, 00:04:33.573 "small_bufsize": 8192, 00:04:33.573 "large_bufsize": 135168, 00:04:33.573 "enable_numa": false 00:04:33.573 } 00:04:33.573 } 00:04:33.573 ] 00:04:33.573 }, 00:04:33.573 { 00:04:33.573 "subsystem": "sock", 00:04:33.573 "config": [ 00:04:33.573 { 
00:04:33.573 "method": "sock_set_default_impl", 00:04:33.573 "params": { 00:04:33.573 "impl_name": "posix" 00:04:33.573 } 00:04:33.573 }, 00:04:33.573 { 00:04:33.573 "method": "sock_impl_set_options", 00:04:33.573 "params": { 00:04:33.573 "impl_name": "ssl", 00:04:33.573 "recv_buf_size": 4096, 00:04:33.573 "send_buf_size": 4096, 00:04:33.573 "enable_recv_pipe": true, 00:04:33.573 "enable_quickack": false, 00:04:33.573 "enable_placement_id": 0, 00:04:33.573 "enable_zerocopy_send_server": true, 00:04:33.573 "enable_zerocopy_send_client": false, 00:04:33.573 "zerocopy_threshold": 0, 00:04:33.573 "tls_version": 0, 00:04:33.573 "enable_ktls": false 00:04:33.573 } 00:04:33.573 }, 00:04:33.573 { 00:04:33.573 "method": "sock_impl_set_options", 00:04:33.573 "params": { 00:04:33.573 "impl_name": "posix", 00:04:33.573 "recv_buf_size": 2097152, 00:04:33.573 "send_buf_size": 2097152, 00:04:33.573 "enable_recv_pipe": true, 00:04:33.573 "enable_quickack": false, 00:04:33.573 "enable_placement_id": 0, 00:04:33.573 "enable_zerocopy_send_server": true, 00:04:33.573 "enable_zerocopy_send_client": false, 00:04:33.573 "zerocopy_threshold": 0, 00:04:33.573 "tls_version": 0, 00:04:33.573 "enable_ktls": false 00:04:33.573 } 00:04:33.573 } 00:04:33.573 ] 00:04:33.573 }, 00:04:33.573 { 00:04:33.573 "subsystem": "vmd", 00:04:33.573 "config": [] 00:04:33.573 }, 00:04:33.573 { 00:04:33.573 "subsystem": "accel", 00:04:33.573 "config": [ 00:04:33.573 { 00:04:33.573 "method": "accel_set_options", 00:04:33.573 "params": { 00:04:33.573 "small_cache_size": 128, 00:04:33.573 "large_cache_size": 16, 00:04:33.573 "task_count": 2048, 00:04:33.573 "sequence_count": 2048, 00:04:33.573 "buf_count": 2048 00:04:33.573 } 00:04:33.574 } 00:04:33.574 ] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "bdev", 00:04:33.574 "config": [ 00:04:33.574 { 00:04:33.574 "method": "bdev_set_options", 00:04:33.574 "params": { 00:04:33.574 "bdev_io_pool_size": 65535, 00:04:33.574 "bdev_io_cache_size": 256, 00:04:33.574 "bdev_auto_examine": true, 00:04:33.574 "iobuf_small_cache_size": 128, 00:04:33.574 "iobuf_large_cache_size": 16 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "bdev_raid_set_options", 00:04:33.574 "params": { 00:04:33.574 "process_window_size_kb": 1024, 00:04:33.574 "process_max_bandwidth_mb_sec": 0 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "bdev_iscsi_set_options", 00:04:33.574 "params": { 00:04:33.574 "timeout_sec": 30 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "bdev_nvme_set_options", 00:04:33.574 "params": { 00:04:33.574 "action_on_timeout": "none", 00:04:33.574 "timeout_us": 0, 00:04:33.574 "timeout_admin_us": 0, 00:04:33.574 "keep_alive_timeout_ms": 10000, 00:04:33.574 "arbitration_burst": 0, 00:04:33.574 "low_priority_weight": 0, 00:04:33.574 "medium_priority_weight": 0, 00:04:33.574 "high_priority_weight": 0, 00:04:33.574 "nvme_adminq_poll_period_us": 10000, 00:04:33.574 "nvme_ioq_poll_period_us": 0, 00:04:33.574 "io_queue_requests": 0, 00:04:33.574 "delay_cmd_submit": true, 00:04:33.574 "transport_retry_count": 4, 00:04:33.574 "bdev_retry_count": 3, 00:04:33.574 "transport_ack_timeout": 0, 00:04:33.574 "ctrlr_loss_timeout_sec": 0, 00:04:33.574 "reconnect_delay_sec": 0, 00:04:33.574 "fast_io_fail_timeout_sec": 0, 00:04:33.574 "disable_auto_failback": false, 00:04:33.574 "generate_uuids": false, 00:04:33.574 "transport_tos": 0, 00:04:33.574 "nvme_error_stat": false, 00:04:33.574 "rdma_srq_size": 0, 00:04:33.574 "io_path_stat": false, 
00:04:33.574 "allow_accel_sequence": false, 00:04:33.574 "rdma_max_cq_size": 0, 00:04:33.574 "rdma_cm_event_timeout_ms": 0, 00:04:33.574 "dhchap_digests": [ 00:04:33.574 "sha256", 00:04:33.574 "sha384", 00:04:33.574 "sha512" 00:04:33.574 ], 00:04:33.574 "dhchap_dhgroups": [ 00:04:33.574 "null", 00:04:33.574 "ffdhe2048", 00:04:33.574 "ffdhe3072", 00:04:33.574 "ffdhe4096", 00:04:33.574 "ffdhe6144", 00:04:33.574 "ffdhe8192" 00:04:33.574 ] 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "bdev_nvme_set_hotplug", 00:04:33.574 "params": { 00:04:33.574 "period_us": 100000, 00:04:33.574 "enable": false 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "bdev_wait_for_examine" 00:04:33.574 } 00:04:33.574 ] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "scsi", 00:04:33.574 "config": null 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "scheduler", 00:04:33.574 "config": [ 00:04:33.574 { 00:04:33.574 "method": "framework_set_scheduler", 00:04:33.574 "params": { 00:04:33.574 "name": "static" 00:04:33.574 } 00:04:33.574 } 00:04:33.574 ] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "vhost_scsi", 00:04:33.574 "config": [] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "vhost_blk", 00:04:33.574 "config": [] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "ublk", 00:04:33.574 "config": [] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "nbd", 00:04:33.574 "config": [] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "nvmf", 00:04:33.574 "config": [ 00:04:33.574 { 00:04:33.574 "method": "nvmf_set_config", 00:04:33.574 "params": { 00:04:33.574 "discovery_filter": "match_any", 00:04:33.574 "admin_cmd_passthru": { 00:04:33.574 "identify_ctrlr": false 00:04:33.574 }, 00:04:33.574 "dhchap_digests": [ 00:04:33.574 "sha256", 00:04:33.574 "sha384", 00:04:33.574 "sha512" 00:04:33.574 ], 00:04:33.574 "dhchap_dhgroups": [ 00:04:33.574 "null", 00:04:33.574 "ffdhe2048", 00:04:33.574 "ffdhe3072", 00:04:33.574 "ffdhe4096", 00:04:33.574 "ffdhe6144", 00:04:33.574 "ffdhe8192" 00:04:33.574 ] 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "nvmf_set_max_subsystems", 00:04:33.574 "params": { 00:04:33.574 "max_subsystems": 1024 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "nvmf_set_crdt", 00:04:33.574 "params": { 00:04:33.574 "crdt1": 0, 00:04:33.574 "crdt2": 0, 00:04:33.574 "crdt3": 0 00:04:33.574 } 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "method": "nvmf_create_transport", 00:04:33.574 "params": { 00:04:33.574 "trtype": "TCP", 00:04:33.574 "max_queue_depth": 128, 00:04:33.574 "max_io_qpairs_per_ctrlr": 127, 00:04:33.574 "in_capsule_data_size": 4096, 00:04:33.574 "max_io_size": 131072, 00:04:33.574 "io_unit_size": 131072, 00:04:33.574 "max_aq_depth": 128, 00:04:33.574 "num_shared_buffers": 511, 00:04:33.574 "buf_cache_size": 4294967295, 00:04:33.574 "dif_insert_or_strip": false, 00:04:33.574 "zcopy": false, 00:04:33.574 "c2h_success": true, 00:04:33.574 "sock_priority": 0, 00:04:33.574 "abort_timeout_sec": 1, 00:04:33.574 "ack_timeout": 0, 00:04:33.574 "data_wr_pool_size": 0 00:04:33.574 } 00:04:33.574 } 00:04:33.574 ] 00:04:33.574 }, 00:04:33.574 { 00:04:33.574 "subsystem": "iscsi", 00:04:33.574 "config": [ 00:04:33.574 { 00:04:33.574 "method": "iscsi_set_options", 00:04:33.574 "params": { 00:04:33.574 "node_base": "iqn.2016-06.io.spdk", 00:04:33.574 "max_sessions": 128, 00:04:33.574 "max_connections_per_session": 2, 00:04:33.574 "max_queue_depth": 64, 00:04:33.574 
"default_time2wait": 2, 00:04:33.574 "default_time2retain": 20, 00:04:33.574 "first_burst_length": 8192, 00:04:33.574 "immediate_data": true, 00:04:33.574 "allow_duplicated_isid": false, 00:04:33.574 "error_recovery_level": 0, 00:04:33.574 "nop_timeout": 60, 00:04:33.574 "nop_in_interval": 30, 00:04:33.574 "disable_chap": false, 00:04:33.574 "require_chap": false, 00:04:33.574 "mutual_chap": false, 00:04:33.574 "chap_group": 0, 00:04:33.574 "max_large_datain_per_connection": 64, 00:04:33.574 "max_r2t_per_connection": 4, 00:04:33.574 "pdu_pool_size": 36864, 00:04:33.574 "immediate_data_pool_size": 16384, 00:04:33.574 "data_out_pool_size": 2048 00:04:33.574 } 00:04:33.574 } 00:04:33.574 ] 00:04:33.574 } 00:04:33.574 ] 00:04:33.574 } 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57485 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57485 ']' 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57485 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57485 00:04:33.574 killing process with pid 57485 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57485' 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57485 00:04:33.574 17:34:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57485 00:04:35.009 17:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57530 00:04:35.009 17:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:35.009 17:34:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57530 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57530 ']' 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57530 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57530 00:04:40.292 killing process with pid 57530 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57530' 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57530 00:04:40.292 17:35:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57530 00:04:41.666 17:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.666 17:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.666 ************************************ 00:04:41.666 END TEST skip_rpc_with_json 00:04:41.666 ************************************ 00:04:41.666 00:04:41.666 real 0m9.175s 00:04:41.666 user 0m8.654s 00:04:41.666 sys 0m0.769s 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.667 17:35:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:41.667 17:35:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.667 17:35:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.667 17:35:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.667 ************************************ 00:04:41.667 START TEST skip_rpc_with_delay 00:04:41.667 ************************************ 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:41.667 17:35:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.667 [2024-11-20 17:35:04.985658] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:41.667 ************************************ 00:04:41.667 END TEST skip_rpc_with_delay 00:04:41.667 ************************************ 00:04:41.667 17:35:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:41.667 17:35:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.667 17:35:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.667 17:35:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.667 00:04:41.667 real 0m0.129s 00:04:41.667 user 0m0.064s 00:04:41.667 sys 0m0.063s 00:04:41.667 17:35:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.667 17:35:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:41.667 17:35:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:41.667 17:35:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:41.667 17:35:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:41.667 17:35:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.667 17:35:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.667 17:35:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.667 ************************************ 00:04:41.667 START TEST exit_on_failed_rpc_init 00:04:41.667 ************************************ 00:04:41.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57647 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57647 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57647 ']' 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.667 17:35:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.667 [2024-11-20 17:35:05.187314] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:04:41.667 [2024-11-20 17:35:05.187445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57647 ] 00:04:41.926 [2024-11-20 17:35:05.346170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.926 [2024-11-20 17:35:05.449453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.491 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.748 [2024-11-20 17:35:06.093121] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:04:42.748 [2024-11-20 17:35:06.093239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57665 ] 00:04:42.748 [2024-11-20 17:35:06.249300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.006 [2024-11-20 17:35:06.364959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.006 [2024-11-20 17:35:06.365064] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:43.006 [2024-11-20 17:35:06.365079] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:43.006 [2024-11-20 17:35:06.365094] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:43.263 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:43.263 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57647 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57647 ']' 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57647 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57647 00:04:43.264 killing process with pid 57647 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57647' 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57647 00:04:43.264 17:35:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57647 00:04:44.636 ************************************ 00:04:44.636 END TEST exit_on_failed_rpc_init 00:04:44.636 ************************************ 00:04:44.636 00:04:44.636 real 0m2.757s 00:04:44.636 user 0m3.025s 00:04:44.636 sys 0m0.479s 00:04:44.636 17:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.636 17:35:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.636 17:35:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.636 00:04:44.636 real 0m19.231s 00:04:44.636 user 0m18.103s 00:04:44.636 sys 0m1.913s 00:04:44.636 ************************************ 00:04:44.636 END TEST skip_rpc 00:04:44.636 ************************************ 00:04:44.636 17:35:07 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.636 17:35:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.636 17:35:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.636 17:35:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.636 17:35:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.636 17:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:44.636 
************************************ 00:04:44.636 START TEST rpc_client 00:04:44.636 ************************************ 00:04:44.636 17:35:07 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.636 * Looking for test storage... 00:04:44.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.636 17:35:08 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.636 --rc genhtml_branch_coverage=1 00:04:44.636 --rc genhtml_function_coverage=1 00:04:44.636 --rc genhtml_legend=1 00:04:44.636 --rc geninfo_all_blocks=1 00:04:44.636 --rc geninfo_unexecuted_blocks=1 00:04:44.636 00:04:44.636 ' 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.636 --rc genhtml_branch_coverage=1 00:04:44.636 --rc genhtml_function_coverage=1 00:04:44.636 --rc genhtml_legend=1 00:04:44.636 --rc geninfo_all_blocks=1 00:04:44.636 --rc geninfo_unexecuted_blocks=1 00:04:44.636 00:04:44.636 ' 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.636 --rc genhtml_branch_coverage=1 00:04:44.636 --rc genhtml_function_coverage=1 00:04:44.636 --rc genhtml_legend=1 00:04:44.636 --rc geninfo_all_blocks=1 00:04:44.636 --rc geninfo_unexecuted_blocks=1 00:04:44.636 00:04:44.636 ' 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.636 --rc genhtml_branch_coverage=1 00:04:44.636 --rc genhtml_function_coverage=1 00:04:44.636 --rc genhtml_legend=1 00:04:44.636 --rc geninfo_all_blocks=1 00:04:44.636 --rc geninfo_unexecuted_blocks=1 00:04:44.636 00:04:44.636 ' 00:04:44.636 17:35:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:44.636 OK 00:04:44.636 17:35:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.636 00:04:44.636 real 0m0.197s 00:04:44.636 user 0m0.113s 00:04:44.636 sys 0m0.089s 00:04:44.636 ************************************ 00:04:44.636 END TEST rpc_client 00:04:44.636 ************************************ 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.636 17:35:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 17:35:08 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.897 17:35:08 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.897 17:35:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.897 17:35:08 -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 ************************************ 00:04:44.897 START TEST json_config 00:04:44.897 ************************************ 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.897 17:35:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.897 17:35:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.897 17:35:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.897 17:35:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.897 17:35:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.897 17:35:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.897 17:35:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.897 17:35:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.897 17:35:08 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.897 17:35:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.897 17:35:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.897 17:35:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.897 17:35:08 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.897 17:35:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.897 17:35:08 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.897 17:35:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.897 17:35:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.897 17:35:08 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.897 17:35:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.897 17:35:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.897 17:35:08 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.897 --rc genhtml_branch_coverage=1 00:04:44.897 --rc genhtml_function_coverage=1 00:04:44.897 --rc genhtml_legend=1 00:04:44.897 --rc geninfo_all_blocks=1 00:04:44.897 --rc geninfo_unexecuted_blocks=1 00:04:44.897 00:04:44.897 ' 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.897 --rc genhtml_branch_coverage=1 00:04:44.897 --rc genhtml_function_coverage=1 00:04:44.897 --rc genhtml_legend=1 00:04:44.897 --rc geninfo_all_blocks=1 00:04:44.897 --rc geninfo_unexecuted_blocks=1 00:04:44.897 00:04:44.897 ' 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.897 --rc genhtml_branch_coverage=1 00:04:44.897 --rc genhtml_function_coverage=1 00:04:44.897 --rc genhtml_legend=1 00:04:44.897 --rc geninfo_all_blocks=1 00:04:44.897 --rc geninfo_unexecuted_blocks=1 00:04:44.897 00:04:44.897 ' 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.897 --rc genhtml_branch_coverage=1 00:04:44.897 --rc genhtml_function_coverage=1 00:04:44.897 --rc genhtml_legend=1 00:04:44.897 --rc geninfo_all_blocks=1 00:04:44.897 --rc geninfo_unexecuted_blocks=1 00:04:44.897 00:04:44.897 ' 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.897 17:35:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:338f90b6-6028-4f4f-a1c1-7f5cc850a1b2 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=338f90b6-6028-4f4f-a1c1-7f5cc850a1b2 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.897 17:35:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.897 17:35:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.897 17:35:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.897 17:35:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.897 17:35:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.897 17:35:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.897 17:35:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.897 17:35:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.897 17:35:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.897 17:35:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.897 17:35:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.897 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.897 17:35:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.897 00:04:44.897 real 0m0.172s 00:04:44.897 user 0m0.112s 00:04:44.897 sys 0m0.053s 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.897 17:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 ************************************ 00:04:44.897 END TEST json_config 00:04:44.897 ************************************ 00:04:44.898 17:35:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.898 17:35:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.898 17:35:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.898 17:35:08 -- common/autotest_common.sh@10 -- # set +x 00:04:45.159 ************************************ 00:04:45.159 START TEST json_config_extra_key 00:04:45.159 ************************************ 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.159 17:35:08 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.159 17:35:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.159 --rc genhtml_branch_coverage=1 00:04:45.159 --rc genhtml_function_coverage=1 00:04:45.159 --rc genhtml_legend=1 00:04:45.159 --rc geninfo_all_blocks=1 00:04:45.159 --rc geninfo_unexecuted_blocks=1 00:04:45.159 00:04:45.159 ' 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.159 --rc genhtml_branch_coverage=1 00:04:45.159 --rc genhtml_function_coverage=1 00:04:45.159 --rc genhtml_legend=1 00:04:45.159 --rc geninfo_all_blocks=1 00:04:45.159 --rc geninfo_unexecuted_blocks=1 00:04:45.159 00:04:45.159 ' 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.159 --rc genhtml_branch_coverage=1 00:04:45.159 --rc genhtml_function_coverage=1 00:04:45.159 --rc genhtml_legend=1 00:04:45.159 --rc geninfo_all_blocks=1 00:04:45.159 --rc geninfo_unexecuted_blocks=1 00:04:45.159 00:04:45.159 ' 00:04:45.159 17:35:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.159 --rc genhtml_branch_coverage=1 00:04:45.159 --rc 
genhtml_function_coverage=1 00:04:45.159 --rc genhtml_legend=1 00:04:45.159 --rc geninfo_all_blocks=1 00:04:45.159 --rc geninfo_unexecuted_blocks=1 00:04:45.159 00:04:45.159 ' 00:04:45.159 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:338f90b6-6028-4f4f-a1c1-7f5cc850a1b2 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=338f90b6-6028-4f4f-a1c1-7f5cc850a1b2 00:04:45.159 17:35:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.160 17:35:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.160 17:35:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.160 17:35:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.160 17:35:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.160 17:35:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.160 17:35:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.160 17:35:08 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.160 17:35:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.160 17:35:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.160 17:35:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:45.160 INFO: launching applications... 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
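The repeated complaint "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" above is a script bug rather than a test failure: an unset flag reaches `'[' '' -eq 1 ']'`, and test's -eq requires integer operands on both sides. A minimal reproduction and a defensive rewrite (the variable name is illustrative; the trace only shows that the value is empty):

    #!/usr/bin/env bash
    # Reproduce: -eq on an empty string prints "integer expression expected".
    SOME_NVMF_FLAG=""                        # hypothetical name for the unset flag
    [ "$SOME_NVMF_FLAG" -eq 1 ] && echo "enabled"

    # Defensive version: default empty/unset values to 0 before the numeric test.
    if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
        echo "enabled"
    fi

Because the script is not running under set -e, the bad test merely returns nonzero and execution continues, which is why the log carries straight on past the error.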
00:04:45.160 17:35:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57859 00:04:45.160 Waiting for target to run... 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57859 /var/tmp/spdk_tgt.sock 00:04:45.160 17:35:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57859 ']' 00:04:45.160 17:35:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.160 17:35:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.160 17:35:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.160 17:35:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.160 17:35:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.160 17:35:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.160 [2024-11-20 17:35:08.691900] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:04:45.160 [2024-11-20 17:35:08.692068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57859 ] 00:04:45.733 [2024-11-20 17:35:09.085399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.733 [2024-11-20 17:35:09.191643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.303 17:35:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.303 17:35:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:46.303 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:46.303 INFO: shutting down applications... 00:04:46.303 17:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
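Target startup above is gated by `waitforlisten 57859 /var/tmp/spdk_tgt.sock` with max_retries=100. A sketch of that gate, assuming the repo's rpc.py client as the liveness probe (the actual helper in common/autotest_common.sh may differ in detail):

    #!/usr/bin/env bash
    # Block until a just-launched SPDK target answers RPC on its Unix socket.
    pid=$1                                   # e.g. 57859
    sock=${2:-/var/tmp/spdk_tgt.sock}
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do          # 100 mirrors max_retries in the trace
        kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        # A successful RPC round-trip proves the listener is up.
        if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
            exit 0
        fi
        sleep 0.1                            # interval is illustrative
    done
    echo "timed out waiting for $sock" >&2
    exit 1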
00:04:46.303 17:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57859 ]] 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57859 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57859 00:04:46.303 17:35:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.898 17:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.898 17:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.898 17:35:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57859 00:04:46.898 17:35:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.464 17:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.464 17:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.464 17:35:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57859 00:04:47.464 17:35:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.721 17:35:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.721 17:35:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.721 17:35:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57859 00:04:47.721 17:35:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.285 17:35:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.285 17:35:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.285 17:35:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57859 00:04:48.285 17:35:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.285 17:35:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.285 17:35:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.285 SPDK target shutdown done 00:04:48.285 17:35:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.285 Success 00:04:48.285 17:35:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.285 00:04:48.285 real 0m3.292s 00:04:48.285 user 0m2.901s 00:04:48.285 sys 0m0.500s 00:04:48.285 17:35:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.285 17:35:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.285 ************************************ 00:04:48.285 END TEST json_config_extra_key 00:04:48.285 ************************************ 00:04:48.285 17:35:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.285 17:35:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.285 17:35:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.285 17:35:11 -- common/autotest_common.sh@10 -- # set +x 00:04:48.285 
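Shutdown is the mirror image of startup: one SIGINT, then up to thirty 0.5 s probes with `kill -0` until the pid vanishes, at which point the harness prints 'SPDK target shutdown done'. A sketch of that loop; the SIGKILL escalation at the end is an addition here, not something the traced script does:

    #!/usr/bin/env bash
    # Ask an SPDK app to exit cleanly, then poll until the pid disappears.
    pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 delivers no signal; it only tests that the process exists.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "SPDK target shutdown done"
            exit 0
        fi
        sleep 0.5
    done
    echo "target ignored SIGINT, escalating" >&2   # fallback not in the traced script
    kill -SIGKILL "$pid" 2>/dev/null
    exit 1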
************************************ 00:04:48.285 START TEST alias_rpc 00:04:48.285 ************************************ 00:04:48.285 17:35:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.543 * Looking for test storage... 00:04:48.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.543 17:35:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.543 --rc genhtml_branch_coverage=1 00:04:48.543 --rc genhtml_function_coverage=1 00:04:48.543 --rc genhtml_legend=1 00:04:48.543 --rc geninfo_all_blocks=1 00:04:48.543 --rc geninfo_unexecuted_blocks=1 00:04:48.543 00:04:48.543 ' 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.543 --rc genhtml_branch_coverage=1 00:04:48.543 --rc genhtml_function_coverage=1 00:04:48.543 --rc genhtml_legend=1 00:04:48.543 --rc geninfo_all_blocks=1 00:04:48.543 --rc geninfo_unexecuted_blocks=1 00:04:48.543 00:04:48.543 ' 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.543 --rc genhtml_branch_coverage=1 00:04:48.543 --rc genhtml_function_coverage=1 00:04:48.543 --rc genhtml_legend=1 00:04:48.543 --rc geninfo_all_blocks=1 00:04:48.543 --rc geninfo_unexecuted_blocks=1 00:04:48.543 00:04:48.543 ' 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.543 --rc genhtml_branch_coverage=1 00:04:48.543 --rc genhtml_function_coverage=1 00:04:48.543 --rc genhtml_legend=1 00:04:48.543 --rc geninfo_all_blocks=1 00:04:48.543 --rc geninfo_unexecuted_blocks=1 00:04:48.543 00:04:48.543 ' 00:04:48.543 17:35:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.543 17:35:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57957 00:04:48.543 17:35:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57957 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57957 ']' 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
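Every test above re-runs the same coverage gate before exporting LCOV_OPTS: pull the lcov version with `awk '{print $NF}'`, split it on IFS=.-: and compare it component-wise against 2 (the lt/cmp_versions pair from scripts/common.sh visible in the trace). A condensed sketch of that comparison, assuming purely numeric dotted versions:

    #!/usr/bin/env bash
    # Return 0 if dotted version $1 is strictly less than $2.
    version_lt() {              # illustrative name; the repo spells this lt/cmp_versions
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            # Missing components compare as 0, exactly like the traced loop.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                # equal is not less-than
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi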
00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.543 17:35:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.543 17:35:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.543 [2024-11-20 17:35:11.982772] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:04:48.544 [2024-11-20 17:35:11.982894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57957 ] 00:04:48.801 [2024-11-20 17:35:12.143034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.801 [2024-11-20 17:35:12.257410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.734 17:35:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.734 17:35:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.734 17:35:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.734 17:35:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57957 00:04:49.734 17:35:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57957 ']' 00:04:49.734 17:35:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57957 00:04:49.734 17:35:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:49.734 17:35:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.734 17:35:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57957 00:04:49.734 17:35:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.735 17:35:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.735 killing process with pid 57957 00:04:49.735 17:35:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57957' 00:04:49.735 17:35:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 57957 00:04:49.735 17:35:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 57957 00:04:51.631 00:04:51.631 real 0m2.959s 00:04:51.631 user 0m2.991s 00:04:51.631 sys 0m0.467s 00:04:51.631 ************************************ 00:04:51.631 END TEST alias_rpc 00:04:51.631 17:35:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.631 17:35:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.631 ************************************ 00:04:51.631 17:35:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:51.631 17:35:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.631 17:35:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.631 17:35:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.631 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:04:51.631 ************************************ 00:04:51.631 START TEST spdkcli_tcp 00:04:51.631 ************************************ 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.631 * Looking for test storage... 
00:04:51.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.631 17:35:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.631 --rc genhtml_branch_coverage=1 00:04:51.631 --rc genhtml_function_coverage=1 00:04:51.631 --rc genhtml_legend=1 00:04:51.631 --rc geninfo_all_blocks=1 00:04:51.631 --rc geninfo_unexecuted_blocks=1 00:04:51.631 00:04:51.631 ' 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.631 --rc genhtml_branch_coverage=1 00:04:51.631 --rc genhtml_function_coverage=1 00:04:51.631 --rc genhtml_legend=1 00:04:51.631 --rc geninfo_all_blocks=1 00:04:51.631 --rc geninfo_unexecuted_blocks=1 00:04:51.631 
00:04:51.631 ' 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.631 --rc genhtml_branch_coverage=1 00:04:51.631 --rc genhtml_function_coverage=1 00:04:51.631 --rc genhtml_legend=1 00:04:51.631 --rc geninfo_all_blocks=1 00:04:51.631 --rc geninfo_unexecuted_blocks=1 00:04:51.631 00:04:51.631 ' 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.631 --rc genhtml_branch_coverage=1 00:04:51.631 --rc genhtml_function_coverage=1 00:04:51.631 --rc genhtml_legend=1 00:04:51.631 --rc geninfo_all_blocks=1 00:04:51.631 --rc geninfo_unexecuted_blocks=1 00:04:51.631 00:04:51.631 ' 00:04:51.631 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:51.631 17:35:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:51.631 17:35:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:51.631 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.631 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.631 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.631 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.631 17:35:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.632 17:35:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.632 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58052 00:04:51.632 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58052 00:04:51.632 17:35:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.632 17:35:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58052 ']' 00:04:51.632 17:35:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.632 17:35:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.632 17:35:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.632 17:35:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.632 17:35:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.632 [2024-11-20 17:35:15.016776] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
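The spdkcli_tcp pass that follows never dials /var/tmp/spdk.sock directly: it keeps a socat process (pid 58066 below) bridging TCP port 9998 to the Unix socket and drives every RPC over 127.0.0.1. A minimal sketch of the bridge, with the socat and rpc.py invocations copied from the trace:

    #!/usr/bin/env bash
    # Expose the SPDK RPC Unix socket on a local TCP port, then call through it.
    sock=/var/tmp/spdk.sock
    port=9998
    socat TCP-LISTEN:"$port" UNIX-CONNECT:"$sock" &
    socat_pid=$!
    # -r 100: connection retries, -t 2: per-call timeout, as in the traced run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p "$port" rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true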
00:04:51.632 [2024-11-20 17:35:15.017100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58052 ] 00:04:51.888 [2024-11-20 17:35:15.178403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.888 [2024-11-20 17:35:15.295583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.888 [2024-11-20 17:35:15.295785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.452 17:35:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.452 17:35:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:52.452 17:35:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58066 00:04:52.452 17:35:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:52.452 17:35:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.710 [ 00:04:52.710 "bdev_malloc_delete", 00:04:52.710 "bdev_malloc_create", 00:04:52.710 "bdev_null_resize", 00:04:52.710 "bdev_null_delete", 00:04:52.710 "bdev_null_create", 00:04:52.710 "bdev_nvme_cuse_unregister", 00:04:52.710 "bdev_nvme_cuse_register", 00:04:52.710 "bdev_opal_new_user", 00:04:52.710 "bdev_opal_set_lock_state", 00:04:52.710 "bdev_opal_delete", 00:04:52.710 "bdev_opal_get_info", 00:04:52.710 "bdev_opal_create", 00:04:52.710 "bdev_nvme_opal_revert", 00:04:52.710 "bdev_nvme_opal_init", 00:04:52.710 "bdev_nvme_send_cmd", 00:04:52.710 "bdev_nvme_set_keys", 00:04:52.710 "bdev_nvme_get_path_iostat", 00:04:52.710 "bdev_nvme_get_mdns_discovery_info", 00:04:52.710 "bdev_nvme_stop_mdns_discovery", 00:04:52.710 "bdev_nvme_start_mdns_discovery", 00:04:52.710 "bdev_nvme_set_multipath_policy", 00:04:52.710 "bdev_nvme_set_preferred_path", 00:04:52.710 "bdev_nvme_get_io_paths", 00:04:52.710 "bdev_nvme_remove_error_injection", 00:04:52.710 "bdev_nvme_add_error_injection", 00:04:52.710 "bdev_nvme_get_discovery_info", 00:04:52.710 "bdev_nvme_stop_discovery", 00:04:52.710 "bdev_nvme_start_discovery", 00:04:52.710 "bdev_nvme_get_controller_health_info", 00:04:52.710 "bdev_nvme_disable_controller", 00:04:52.710 "bdev_nvme_enable_controller", 00:04:52.710 "bdev_nvme_reset_controller", 00:04:52.710 "bdev_nvme_get_transport_statistics", 00:04:52.710 "bdev_nvme_apply_firmware", 00:04:52.710 "bdev_nvme_detach_controller", 00:04:52.710 "bdev_nvme_get_controllers", 00:04:52.710 "bdev_nvme_attach_controller", 00:04:52.710 "bdev_nvme_set_hotplug", 00:04:52.710 "bdev_nvme_set_options", 00:04:52.710 "bdev_passthru_delete", 00:04:52.710 "bdev_passthru_create", 00:04:52.710 "bdev_lvol_set_parent_bdev", 00:04:52.710 "bdev_lvol_set_parent", 00:04:52.710 "bdev_lvol_check_shallow_copy", 00:04:52.710 "bdev_lvol_start_shallow_copy", 00:04:52.710 "bdev_lvol_grow_lvstore", 00:04:52.710 "bdev_lvol_get_lvols", 00:04:52.710 "bdev_lvol_get_lvstores", 00:04:52.710 "bdev_lvol_delete", 00:04:52.710 "bdev_lvol_set_read_only", 00:04:52.710 "bdev_lvol_resize", 00:04:52.710 "bdev_lvol_decouple_parent", 00:04:52.710 "bdev_lvol_inflate", 00:04:52.710 "bdev_lvol_rename", 00:04:52.710 "bdev_lvol_clone_bdev", 00:04:52.710 "bdev_lvol_clone", 00:04:52.710 "bdev_lvol_snapshot", 00:04:52.710 "bdev_lvol_create", 00:04:52.710 "bdev_lvol_delete_lvstore", 00:04:52.710 "bdev_lvol_rename_lvstore", 00:04:52.710 
"bdev_lvol_create_lvstore", 00:04:52.710 "bdev_raid_set_options", 00:04:52.710 "bdev_raid_remove_base_bdev", 00:04:52.710 "bdev_raid_add_base_bdev", 00:04:52.710 "bdev_raid_delete", 00:04:52.710 "bdev_raid_create", 00:04:52.710 "bdev_raid_get_bdevs", 00:04:52.710 "bdev_error_inject_error", 00:04:52.710 "bdev_error_delete", 00:04:52.710 "bdev_error_create", 00:04:52.710 "bdev_split_delete", 00:04:52.710 "bdev_split_create", 00:04:52.710 "bdev_delay_delete", 00:04:52.710 "bdev_delay_create", 00:04:52.710 "bdev_delay_update_latency", 00:04:52.710 "bdev_zone_block_delete", 00:04:52.710 "bdev_zone_block_create", 00:04:52.710 "blobfs_create", 00:04:52.710 "blobfs_detect", 00:04:52.710 "blobfs_set_cache_size", 00:04:52.710 "bdev_xnvme_delete", 00:04:52.710 "bdev_xnvme_create", 00:04:52.710 "bdev_aio_delete", 00:04:52.710 "bdev_aio_rescan", 00:04:52.710 "bdev_aio_create", 00:04:52.710 "bdev_ftl_set_property", 00:04:52.710 "bdev_ftl_get_properties", 00:04:52.710 "bdev_ftl_get_stats", 00:04:52.710 "bdev_ftl_unmap", 00:04:52.710 "bdev_ftl_unload", 00:04:52.710 "bdev_ftl_delete", 00:04:52.710 "bdev_ftl_load", 00:04:52.710 "bdev_ftl_create", 00:04:52.710 "bdev_virtio_attach_controller", 00:04:52.710 "bdev_virtio_scsi_get_devices", 00:04:52.710 "bdev_virtio_detach_controller", 00:04:52.710 "bdev_virtio_blk_set_hotplug", 00:04:52.710 "bdev_iscsi_delete", 00:04:52.710 "bdev_iscsi_create", 00:04:52.710 "bdev_iscsi_set_options", 00:04:52.710 "accel_error_inject_error", 00:04:52.710 "ioat_scan_accel_module", 00:04:52.710 "dsa_scan_accel_module", 00:04:52.710 "iaa_scan_accel_module", 00:04:52.710 "keyring_file_remove_key", 00:04:52.710 "keyring_file_add_key", 00:04:52.710 "keyring_linux_set_options", 00:04:52.710 "fsdev_aio_delete", 00:04:52.710 "fsdev_aio_create", 00:04:52.710 "iscsi_get_histogram", 00:04:52.710 "iscsi_enable_histogram", 00:04:52.710 "iscsi_set_options", 00:04:52.710 "iscsi_get_auth_groups", 00:04:52.710 "iscsi_auth_group_remove_secret", 00:04:52.710 "iscsi_auth_group_add_secret", 00:04:52.710 "iscsi_delete_auth_group", 00:04:52.710 "iscsi_create_auth_group", 00:04:52.710 "iscsi_set_discovery_auth", 00:04:52.710 "iscsi_get_options", 00:04:52.710 "iscsi_target_node_request_logout", 00:04:52.710 "iscsi_target_node_set_redirect", 00:04:52.710 "iscsi_target_node_set_auth", 00:04:52.710 "iscsi_target_node_add_lun", 00:04:52.710 "iscsi_get_stats", 00:04:52.710 "iscsi_get_connections", 00:04:52.710 "iscsi_portal_group_set_auth", 00:04:52.710 "iscsi_start_portal_group", 00:04:52.710 "iscsi_delete_portal_group", 00:04:52.710 "iscsi_create_portal_group", 00:04:52.710 "iscsi_get_portal_groups", 00:04:52.710 "iscsi_delete_target_node", 00:04:52.710 "iscsi_target_node_remove_pg_ig_maps", 00:04:52.710 "iscsi_target_node_add_pg_ig_maps", 00:04:52.710 "iscsi_create_target_node", 00:04:52.710 "iscsi_get_target_nodes", 00:04:52.710 "iscsi_delete_initiator_group", 00:04:52.710 "iscsi_initiator_group_remove_initiators", 00:04:52.710 "iscsi_initiator_group_add_initiators", 00:04:52.710 "iscsi_create_initiator_group", 00:04:52.710 "iscsi_get_initiator_groups", 00:04:52.710 "nvmf_set_crdt", 00:04:52.710 "nvmf_set_config", 00:04:52.710 "nvmf_set_max_subsystems", 00:04:52.710 "nvmf_stop_mdns_prr", 00:04:52.710 "nvmf_publish_mdns_prr", 00:04:52.710 "nvmf_subsystem_get_listeners", 00:04:52.710 "nvmf_subsystem_get_qpairs", 00:04:52.710 "nvmf_subsystem_get_controllers", 00:04:52.710 "nvmf_get_stats", 00:04:52.710 "nvmf_get_transports", 00:04:52.710 "nvmf_create_transport", 00:04:52.710 "nvmf_get_targets", 00:04:52.710 
"nvmf_delete_target", 00:04:52.710 "nvmf_create_target", 00:04:52.710 "nvmf_subsystem_allow_any_host", 00:04:52.710 "nvmf_subsystem_set_keys", 00:04:52.710 "nvmf_subsystem_remove_host", 00:04:52.710 "nvmf_subsystem_add_host", 00:04:52.710 "nvmf_ns_remove_host", 00:04:52.710 "nvmf_ns_add_host", 00:04:52.711 "nvmf_subsystem_remove_ns", 00:04:52.711 "nvmf_subsystem_set_ns_ana_group", 00:04:52.711 "nvmf_subsystem_add_ns", 00:04:52.711 "nvmf_subsystem_listener_set_ana_state", 00:04:52.711 "nvmf_discovery_get_referrals", 00:04:52.711 "nvmf_discovery_remove_referral", 00:04:52.711 "nvmf_discovery_add_referral", 00:04:52.711 "nvmf_subsystem_remove_listener", 00:04:52.711 "nvmf_subsystem_add_listener", 00:04:52.711 "nvmf_delete_subsystem", 00:04:52.711 "nvmf_create_subsystem", 00:04:52.711 "nvmf_get_subsystems", 00:04:52.711 "env_dpdk_get_mem_stats", 00:04:52.711 "nbd_get_disks", 00:04:52.711 "nbd_stop_disk", 00:04:52.711 "nbd_start_disk", 00:04:52.711 "ublk_recover_disk", 00:04:52.711 "ublk_get_disks", 00:04:52.711 "ublk_stop_disk", 00:04:52.711 "ublk_start_disk", 00:04:52.711 "ublk_destroy_target", 00:04:52.711 "ublk_create_target", 00:04:52.711 "virtio_blk_create_transport", 00:04:52.711 "virtio_blk_get_transports", 00:04:52.711 "vhost_controller_set_coalescing", 00:04:52.711 "vhost_get_controllers", 00:04:52.711 "vhost_delete_controller", 00:04:52.711 "vhost_create_blk_controller", 00:04:52.711 "vhost_scsi_controller_remove_target", 00:04:52.711 "vhost_scsi_controller_add_target", 00:04:52.711 "vhost_start_scsi_controller", 00:04:52.711 "vhost_create_scsi_controller", 00:04:52.711 "thread_set_cpumask", 00:04:52.711 "scheduler_set_options", 00:04:52.711 "framework_get_governor", 00:04:52.711 "framework_get_scheduler", 00:04:52.711 "framework_set_scheduler", 00:04:52.711 "framework_get_reactors", 00:04:52.711 "thread_get_io_channels", 00:04:52.711 "thread_get_pollers", 00:04:52.711 "thread_get_stats", 00:04:52.711 "framework_monitor_context_switch", 00:04:52.711 "spdk_kill_instance", 00:04:52.711 "log_enable_timestamps", 00:04:52.711 "log_get_flags", 00:04:52.711 "log_clear_flag", 00:04:52.711 "log_set_flag", 00:04:52.711 "log_get_level", 00:04:52.711 "log_set_level", 00:04:52.711 "log_get_print_level", 00:04:52.711 "log_set_print_level", 00:04:52.711 "framework_enable_cpumask_locks", 00:04:52.711 "framework_disable_cpumask_locks", 00:04:52.711 "framework_wait_init", 00:04:52.711 "framework_start_init", 00:04:52.711 "scsi_get_devices", 00:04:52.711 "bdev_get_histogram", 00:04:52.711 "bdev_enable_histogram", 00:04:52.711 "bdev_set_qos_limit", 00:04:52.711 "bdev_set_qd_sampling_period", 00:04:52.711 "bdev_get_bdevs", 00:04:52.711 "bdev_reset_iostat", 00:04:52.711 "bdev_get_iostat", 00:04:52.711 "bdev_examine", 00:04:52.711 "bdev_wait_for_examine", 00:04:52.711 "bdev_set_options", 00:04:52.711 "accel_get_stats", 00:04:52.711 "accel_set_options", 00:04:52.711 "accel_set_driver", 00:04:52.711 "accel_crypto_key_destroy", 00:04:52.711 "accel_crypto_keys_get", 00:04:52.711 "accel_crypto_key_create", 00:04:52.711 "accel_assign_opc", 00:04:52.711 "accel_get_module_info", 00:04:52.711 "accel_get_opc_assignments", 00:04:52.711 "vmd_rescan", 00:04:52.711 "vmd_remove_device", 00:04:52.711 "vmd_enable", 00:04:52.711 "sock_get_default_impl", 00:04:52.711 "sock_set_default_impl", 00:04:52.711 "sock_impl_set_options", 00:04:52.711 "sock_impl_get_options", 00:04:52.711 "iobuf_get_stats", 00:04:52.711 "iobuf_set_options", 00:04:52.711 "keyring_get_keys", 00:04:52.711 "framework_get_pci_devices", 00:04:52.711 
"framework_get_config", 00:04:52.711 "framework_get_subsystems", 00:04:52.711 "fsdev_set_opts", 00:04:52.711 "fsdev_get_opts", 00:04:52.711 "trace_get_info", 00:04:52.711 "trace_get_tpoint_group_mask", 00:04:52.711 "trace_disable_tpoint_group", 00:04:52.711 "trace_enable_tpoint_group", 00:04:52.711 "trace_clear_tpoint_mask", 00:04:52.711 "trace_set_tpoint_mask", 00:04:52.711 "notify_get_notifications", 00:04:52.711 "notify_get_types", 00:04:52.711 "spdk_get_version", 00:04:52.711 "rpc_get_methods" 00:04:52.711 ] 00:04:52.711 17:35:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.711 17:35:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.711 17:35:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58052 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58052 ']' 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58052 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58052 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58052' 00:04:52.711 killing process with pid 58052 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58052 00:04:52.711 17:35:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58052 00:04:54.616 ************************************ 00:04:54.616 END TEST spdkcli_tcp 00:04:54.616 ************************************ 00:04:54.616 00:04:54.616 real 0m3.046s 00:04:54.616 user 0m5.385s 00:04:54.616 sys 0m0.521s 00:04:54.616 17:35:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.616 17:35:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.616 17:35:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.616 17:35:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.616 17:35:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.616 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:04:54.616 ************************************ 00:04:54.616 START TEST dpdk_mem_utility 00:04:54.616 ************************************ 00:04:54.616 17:35:17 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.616 * Looking for test storage... 
00:04:54.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:54.616 17:35:17 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.616 17:35:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.616 17:35:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.616 17:35:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:54.616 17:35:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.616 17:35:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:54.616 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.616 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc genhtml_branch_coverage=1 00:04:54.616 --rc genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.616 --rc geninfo_unexecuted_blocks=1 00:04:54.616 00:04:54.616 ' 00:04:54.616 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc 
genhtml_branch_coverage=1 00:04:54.616 --rc genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.616 --rc geninfo_unexecuted_blocks=1 00:04:54.616 00:04:54.616 ' 00:04:54.616 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc genhtml_branch_coverage=1 00:04:54.616 --rc genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.616 --rc geninfo_unexecuted_blocks=1 00:04:54.616 00:04:54.616 ' 00:04:54.616 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.616 --rc genhtml_branch_coverage=1 00:04:54.616 --rc genhtml_function_coverage=1 00:04:54.616 --rc genhtml_legend=1 00:04:54.616 --rc geninfo_all_blocks=1 00:04:54.617 --rc geninfo_unexecuted_blocks=1 00:04:54.617 00:04:54.617 ' 00:04:54.617 17:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:54.617 17:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58159 00:04:54.617 17:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58159 00:04:54.617 17:35:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.617 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58159 ']' 00:04:54.617 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.617 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.617 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.617 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.617 17:35:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.617 [2024-11-20 17:35:18.111043] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
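The dpdk_mem_utility pass below is a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write its allocator state to a file (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders that dump, first as heap/mempool/memzone totals and then, with -m 0 as in the trace, as a per-element heap listing. A condensed sketch of the same flow against an already running target:

    #!/usr/bin/env bash
    # Dump DPDK memory state from a live SPDK target and pretty-print it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    "$rpc" env_dpdk_get_mem_stats        # reply: {"filename": "/tmp/spdk_mem_dump.txt"}
    "$mem_script"                        # heap/mempool/memzone summary
    "$mem_script" -m 0                   # detailed element list, as in the trace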
00:04:54.617 [2024-11-20 17:35:18.111381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58159 ] 00:04:54.875 [2024-11-20 17:35:18.286966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.875 [2024-11-20 17:35:18.400039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.821 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.821 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:55.821 17:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:55.821 17:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:55.821 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.821 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.821 { 00:04:55.821 "filename": "/tmp/spdk_mem_dump.txt" 00:04:55.821 } 00:04:55.821 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.821 17:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.821 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:55.821 1 heaps totaling size 824.000000 MiB 00:04:55.821 size: 824.000000 MiB heap id: 0 00:04:55.821 end heaps---------- 00:04:55.821 9 mempools totaling size 603.782043 MiB 00:04:55.821 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:55.821 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:55.821 size: 100.555481 MiB name: bdev_io_58159 00:04:55.821 size: 50.003479 MiB name: msgpool_58159 00:04:55.821 size: 36.509338 MiB name: fsdev_io_58159 00:04:55.821 size: 21.763794 MiB name: PDU_Pool 00:04:55.821 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:55.821 size: 4.133484 MiB name: evtpool_58159 00:04:55.821 size: 0.026123 MiB name: Session_Pool 00:04:55.821 end mempools------- 00:04:55.821 6 memzones totaling size 4.142822 MiB 00:04:55.821 size: 1.000366 MiB name: RG_ring_0_58159 00:04:55.821 size: 1.000366 MiB name: RG_ring_1_58159 00:04:55.821 size: 1.000366 MiB name: RG_ring_4_58159 00:04:55.821 size: 1.000366 MiB name: RG_ring_5_58159 00:04:55.821 size: 0.125366 MiB name: RG_ring_2_58159 00:04:55.821 size: 0.015991 MiB name: RG_ring_3_58159 00:04:55.821 end memzones------- 00:04:55.821 17:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:55.821 heap id: 0 total size: 824.000000 MiB number of busy elements: 328 number of free elements: 18 00:04:55.821 list of free elements. 
size: 16.778198 MiB 00:04:55.821 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:55.821 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:55.821 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:55.821 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:55.821 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:55.821 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:55.821 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:55.821 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:55.821 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:55.821 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:55.821 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:55.821 element at address: 0x20001b400000 with size: 0.559509 MiB 00:04:55.821 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:55.821 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:55.821 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:55.821 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:55.821 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:55.821 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:55.821 list of standard malloc elements. size: 199.290894 MiB 00:04:55.821 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:55.821 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:55.821 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:55.821 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:55.821 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:55.821 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:55.821 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:55.821 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:55.821 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:55.821 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:55.821 element at address: 0x200012bff040 with size: 0.000305 MiB
[... several hundred uniform "element at address: 0x... with size: 0.000244 MiB" entries (addresses 0x2000002d7b00 through 0x20002886fc80, spanning the 0x2000004..., 0x2000008..., 0x200000c..., 0x20000a5..., 0x200012..., 0x2000196..., 0x20001b4..., and 0x2000288... regions) elided ...]
00:04:55.824 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:55.824 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:55.824 list of memzone associated elements. size: 607.930908 MiB 00:04:55.824 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:55.824 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:55.824 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:55.824 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:55.824 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:55.824 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58159_0 00:04:55.824 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:55.824 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58159_0 00:04:55.824 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:55.824 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58159_0 00:04:55.824 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:55.824 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:55.824 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:55.824 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:55.824 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:55.824 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58159_0 00:04:55.824 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:55.824 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58159 00:04:55.824 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:55.824 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58159 00:04:55.824 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:55.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:55.824 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:55.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:55.824 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:55.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:55.824 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:55.824 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:55.824 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:55.824 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58159 00:04:55.824 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:55.824 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58159 00:04:55.824 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:55.824 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58159 00:04:55.824 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:55.824 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58159 00:04:55.824 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:55.824 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58159 00:04:55.824 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:55.824 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58159 00:04:55.824 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:55.824 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:55.824 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:55.824 
associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:55.824 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:55.824 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:55.824 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:55.824 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58159 00:04:55.824 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:55.824 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58159 00:04:55.824 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:55.824 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:55.824 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:55.824 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:55.824 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:55.824 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58159 00:04:55.824 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:55.824 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:55.824 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:55.824 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58159 00:04:55.824 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:55.824 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58159 00:04:55.824 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:55.824 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58159 00:04:55.824 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:55.824 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:55.824 17:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:55.824 17:35:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58159 00:04:55.824 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58159 ']' 00:04:55.824 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58159 00:04:55.824 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:55.824 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.824 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58159 00:04:55.824 killing process with pid 58159 00:04:55.824 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.824 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.826 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58159' 00:04:55.826 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58159 00:04:55.826 17:35:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58159 00:04:57.728 ************************************ 00:04:57.728 END TEST dpdk_mem_utility 00:04:57.728 ************************************ 00:04:57.728 00:04:57.728 real 0m2.916s 00:04:57.728 user 0m2.889s 00:04:57.728 sys 0m0.466s 00:04:57.728 17:35:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.729 17:35:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 17:35:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.729 17:35:20 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.729 17:35:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.729 17:35:20 -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 ************************************ 00:04:57.729 START TEST event 00:04:57.729 ************************************ 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.729 * Looking for test storage... 00:04:57.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.729 17:35:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.729 17:35:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.729 17:35:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.729 17:35:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.729 17:35:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.729 17:35:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.729 17:35:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.729 17:35:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.729 17:35:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.729 17:35:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.729 17:35:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.729 17:35:20 event -- scripts/common.sh@344 -- # case "$op" in 00:04:57.729 17:35:20 event -- scripts/common.sh@345 -- # : 1 00:04:57.729 17:35:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.729 17:35:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.729 17:35:20 event -- scripts/common.sh@365 -- # decimal 1 00:04:57.729 17:35:20 event -- scripts/common.sh@353 -- # local d=1 00:04:57.729 17:35:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.729 17:35:20 event -- scripts/common.sh@355 -- # echo 1 00:04:57.729 17:35:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.729 17:35:20 event -- scripts/common.sh@366 -- # decimal 2 00:04:57.729 17:35:20 event -- scripts/common.sh@353 -- # local d=2 00:04:57.729 17:35:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.729 17:35:20 event -- scripts/common.sh@355 -- # echo 2 00:04:57.729 17:35:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.729 17:35:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.729 17:35:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.729 17:35:20 event -- scripts/common.sh@368 -- # return 0 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.729 --rc genhtml_branch_coverage=1 00:04:57.729 --rc genhtml_function_coverage=1 00:04:57.729 --rc genhtml_legend=1 00:04:57.729 --rc geninfo_all_blocks=1 00:04:57.729 --rc geninfo_unexecuted_blocks=1 00:04:57.729 00:04:57.729 ' 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.729 --rc genhtml_branch_coverage=1 00:04:57.729 --rc genhtml_function_coverage=1 00:04:57.729 --rc genhtml_legend=1 00:04:57.729 --rc geninfo_all_blocks=1 00:04:57.729 --rc geninfo_unexecuted_blocks=1 00:04:57.729 00:04:57.729 ' 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.729 --rc genhtml_branch_coverage=1 00:04:57.729 --rc genhtml_function_coverage=1 00:04:57.729 --rc genhtml_legend=1 00:04:57.729 --rc geninfo_all_blocks=1 00:04:57.729 --rc geninfo_unexecuted_blocks=1 00:04:57.729 00:04:57.729 ' 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.729 --rc genhtml_branch_coverage=1 00:04:57.729 --rc genhtml_function_coverage=1 00:04:57.729 --rc genhtml_legend=1 00:04:57.729 --rc geninfo_all_blocks=1 00:04:57.729 --rc geninfo_unexecuted_blocks=1 00:04:57.729 00:04:57.729 ' 00:04:57.729 17:35:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:57.729 17:35:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.729 17:35:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:57.729 17:35:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.729 17:35:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.729 ************************************ 00:04:57.729 START TEST event_perf 00:04:57.729 ************************************ 00:04:57.729 17:35:20 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.729 Running I/O for 1 seconds...[2024-11-20 
17:35:20.988376] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:04:57.729 [2024-11-20 17:35:20.988581] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:04:57.729 [2024-11-20 17:35:21.150245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.987 [2024-11-20 17:35:21.270234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.987 [2024-11-20 17:35:21.270518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.987 [2024-11-20 17:35:21.270512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.987 Running I/O for 1 seconds...[2024-11-20 17:35:21.270446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.923 00:04:58.923 lcore 0: 156710 00:04:58.923 lcore 1: 156710 00:04:58.923 lcore 2: 156711 00:04:58.923 lcore 3: 156707 00:04:58.923 done. 00:04:58.923 ************************************ 00:04:58.923 END TEST event_perf 00:04:58.923 ************************************ 00:04:58.923 00:04:58.923 real 0m1.488s 00:04:58.923 user 0m4.264s 00:04:58.923 sys 0m0.102s 00:04:58.923 17:35:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.923 17:35:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 17:35:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.182 17:35:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:59.182 17:35:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.182 17:35:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.182 ************************************ 00:04:59.182 START TEST event_reactor 00:04:59.182 ************************************ 00:04:59.182 17:35:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.182 [2024-11-20 17:35:22.530760] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:04:59.182 [2024-11-20 17:35:22.530857] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58301 ] 00:04:59.182 [2024-11-20 17:35:22.688982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.441 [2024-11-20 17:35:22.807300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.817 test_start 00:05:00.817 oneshot 00:05:00.817 tick 100 00:05:00.817 tick 100 00:05:00.817 tick 250 00:05:00.817 tick 100 00:05:00.817 tick 100 00:05:00.817 tick 100 00:05:00.817 tick 250 00:05:00.817 tick 500 00:05:00.817 tick 100 00:05:00.817 tick 100 00:05:00.817 tick 250 00:05:00.817 tick 100 00:05:00.817 tick 100 00:05:00.817 test_end 00:05:00.817 00:05:00.817 real 0m1.476s 00:05:00.817 user 0m1.291s 00:05:00.817 sys 0m0.075s 00:05:00.817 17:35:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.817 17:35:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.817 ************************************ 00:05:00.817 END TEST event_reactor 00:05:00.817 ************************************ 00:05:00.817 17:35:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.817 17:35:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.817 17:35:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.817 17:35:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.817 ************************************ 00:05:00.817 START TEST event_reactor_perf 00:05:00.817 ************************************ 00:05:00.817 17:35:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.817 [2024-11-20 17:35:24.055988] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:00.817 [2024-11-20 17:35:24.056108] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58332 ] 00:05:00.817 [2024-11-20 17:35:24.216687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.817 [2024-11-20 17:35:24.338310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.188 test_start 00:05:02.188 test_end 00:05:02.188 Performance: 312976 events per second 00:05:02.188 00:05:02.188 real 0m1.475s 00:05:02.188 user 0m1.294s 00:05:02.188 sys 0m0.072s 00:05:02.188 17:35:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.188 17:35:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.188 ************************************ 00:05:02.188 END TEST event_reactor_perf 00:05:02.188 ************************************ 00:05:02.188 17:35:25 event -- event/event.sh@49 -- # uname -s 00:05:02.188 17:35:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.188 17:35:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.188 17:35:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.188 17:35:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.188 17:35:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.188 ************************************ 00:05:02.188 START TEST event_scheduler 00:05:02.188 ************************************ 00:05:02.188 17:35:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.188 * Looking for test storage... 
00:05:02.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:02.188 17:35:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.188 17:35:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.188 17:35:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.188 17:35:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.188 17:35:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.189 17:35:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 17:35:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.189 17:35:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58408 00:05:02.189 17:35:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.189 17:35:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.189 17:35:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58408 00:05:02.189 17:35:25 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58408 ']' 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.189 17:35:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.447 [2024-11-20 17:35:25.758690] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:02.447 [2024-11-20 17:35:25.758976] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58408 ] 00:05:02.447 [2024-11-20 17:35:25.918904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.706 [2024-11-20 17:35:26.024782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.706 [2024-11-20 17:35:26.025030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.706 [2024-11-20 17:35:26.025466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.706 [2024-11-20 17:35:26.025478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.271 17:35:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.271 17:35:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:03.271 17:35:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.271 17:35:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.271 17:35:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.271 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.271 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.271 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.271 POWER: Cannot set governor of lcore 0 to performance 00:05:03.271 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.271 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.271 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.271 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.271 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:03.272 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:03.272 POWER: Unable to set Power Management Environment for lcore 0 00:05:03.272 [2024-11-20 17:35:26.611610] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:03.272 [2024-11-20 17:35:26.611648] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:03.272 [2024-11-20 17:35:26.611671] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.272 [2024-11-20 17:35:26.611701] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.272 [2024-11-20 17:35:26.611860] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.272 [2024-11-20 17:35:26.611900] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.272 17:35:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.272 17:35:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.272 17:35:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.272 17:35:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 [2024-11-20 17:35:26.844112] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:03.530 17:35:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.530 17:35:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.530 17:35:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 ************************************ 00:05:03.530 START TEST scheduler_create_thread 00:05:03.530 ************************************ 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 2 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 3 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 4 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 5 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 6 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 7 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 8 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 9 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 10 00:05:03.530 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.531 17:35:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.919 ************************************ 00:05:04.919 END TEST scheduler_create_thread 00:05:04.919 ************************************ 00:05:04.919 17:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.919 00:05:04.919 real 0m1.170s 00:05:04.919 user 0m0.015s 00:05:04.919 sys 0m0.003s 00:05:04.919 17:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.919 17:35:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.919 17:35:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:04.919 17:35:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58408 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58408 ']' 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58408 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58408 00:05:04.919 killing process with pid 58408 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58408' 00:05:04.919 17:35:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58408 00:05:04.919 17:35:28 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58408 00:05:05.177 [2024-11-20 17:35:28.506056] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:05.751 ************************************ 00:05:05.751 END TEST event_scheduler 00:05:05.751 ************************************ 00:05:05.751 00:05:05.751 real 0m3.613s 00:05:05.751 user 0m5.906s 00:05:05.751 sys 0m0.364s 00:05:05.751 17:35:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.751 17:35:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.751 17:35:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:05.751 17:35:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:05.751 17:35:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.751 17:35:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.751 17:35:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.752 ************************************ 00:05:05.752 START TEST app_repeat 00:05:05.752 ************************************ 00:05:05.752 17:35:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:05.752 Process app_repeat pid: 58492 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58492 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58492' 00:05:05.752 spdk_app_start Round 0 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:05.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:05:05.752 17:35:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:05:05.752 17:35:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.752 17:35:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:05.752 17:35:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.752 17:35:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.752 17:35:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.752 17:35:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.752 [2024-11-20 17:35:29.254471] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:05.753 [2024-11-20 17:35:29.255095] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58492 ] 00:05:06.014 [2024-11-20 17:35:29.417683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.014 [2024-11-20 17:35:29.537513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.014 [2024-11-20 17:35:29.537633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.946 17:35:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.946 17:35:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:06.946 17:35:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.946 Malloc0 00:05:06.946 17:35:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.204 Malloc1 00:05:07.204 17:35:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.204 17:35:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.465 /dev/nbd0 00:05:07.465 17:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.465 17:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:07.465 17:35:30 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.465 1+0 records in 00:05:07.465 1+0 records out 00:05:07.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207662 s, 19.7 MB/s 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.465 17:35:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.465 17:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.465 17:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.465 17:35:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.724 /dev/nbd1 00:05:07.724 17:35:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.724 17:35:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.724 1+0 records in 00:05:07.724 1+0 records out 00:05:07.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224774 s, 18.2 MB/s 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:07.724 17:35:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:07.724 17:35:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.724 17:35:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.724 17:35:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.724 17:35:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
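The waitfornbd passes above gate each nbd device on two checks before the test proceeds: the device must show up in /proc/partitions, and a single 4 KiB O_DIRECT read through it must succeed, proving the SPDK backend actually serves I/O. A sketch reconstructed from the trace (the function name and probe-file path are illustrative assumptions):

    probe_nbd() {
        local nbd_name=$1 tmp=/tmp/nbdprobe i rc
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O read; an unserved or hung device fails here
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s "$tmp") -eq 4096 ]]   # exactly one block copied?
        rc=$?
        rm -f "$tmp"
        return $rc
    }

    probe_nbd nbd0 && probe_nbd nbd1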
00:05:07.724 17:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.982 { 00:05:07.982 "nbd_device": "/dev/nbd0", 00:05:07.982 "bdev_name": "Malloc0" 00:05:07.982 }, 00:05:07.982 { 00:05:07.982 "nbd_device": "/dev/nbd1", 00:05:07.982 "bdev_name": "Malloc1" 00:05:07.982 } 00:05:07.982 ]' 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.982 { 00:05:07.982 "nbd_device": "/dev/nbd0", 00:05:07.982 "bdev_name": "Malloc0" 00:05:07.982 }, 00:05:07.982 { 00:05:07.982 "nbd_device": "/dev/nbd1", 00:05:07.982 "bdev_name": "Malloc1" 00:05:07.982 } 00:05:07.982 ]' 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.982 /dev/nbd1' 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.982 /dev/nbd1' 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.982 17:35:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.982 256+0 records in 00:05:07.982 256+0 records out 00:05:07.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430141 s, 244 MB/s 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.983 256+0 records in 00:05:07.983 256+0 records out 00:05:07.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225133 s, 46.6 MB/s 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.983 256+0 records in 00:05:07.983 256+0 records out 00:05:07.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221953 s, 47.2 MB/s 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.983 17:35:31 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.983 17:35:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.241 17:35:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.499 17:35:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.499 17:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.499 17:35:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.499 17:35:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.499 17:35:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.499 17:35:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.499 17:35:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.499 17:35:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.499 17:35:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.756 17:35:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.686 [2024-11-20 17:35:33.068480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.686 [2024-11-20 17:35:33.177360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.686 [2024-11-20 17:35:33.177552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.953 [2024-11-20 17:35:33.288434] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.953 [2024-11-20 17:35:33.288699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.849 17:35:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.849 spdk_app_start Round 1 00:05:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.849 17:35:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:11.849 17:35:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:05:11.849 17:35:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:05:11.849 17:35:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.849 17:35:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.849 17:35:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
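The nbd_dd_data_verify write and verify passes traced in the round above are the core of each app_repeat iteration: push 1 MiB of random data through each nbd device, then read it back with cmp to confirm the Malloc bdevs stored it intact. A self-contained sketch of that round trip, with the temp-file location as an assumption and the device names and dd/cmp arguments taken from the trace:

    verify_nbd_data() {
        local tmp=/tmp/nbdrandtest dev
        dd if=/dev/urandom of="$tmp" bs=4096 count=256       # 1 MiB of random data
        for dev in /dev/nbd0 /dev/nbd1; do
            dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        done
        for dev in /dev/nbd0 /dev/nbd1; do
            cmp -b -n 1M "$tmp" "$dev" || return 1           # byte-for-byte readback
        done
        rm "$tmp"
    }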
00:05:11.849 17:35:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.849 17:35:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.107 17:35:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.107 17:35:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:12.107 17:35:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.364 Malloc0 00:05:12.364 17:35:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.621 Malloc1 00:05:12.621 17:35:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.621 17:35:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.877 /dev/nbd0 00:05:12.877 17:35:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.877 17:35:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.877 1+0 records in 00:05:12.877 1+0 records out 
00:05:12.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209513 s, 19.6 MB/s 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.877 17:35:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.877 17:35:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.877 17:35:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.877 17:35:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.877 /dev/nbd1 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.134 1+0 records in 00:05:13.134 1+0 records out 00:05:13.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284015 s, 14.4 MB/s 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.134 17:35:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.134 { 00:05:13.134 "nbd_device": "/dev/nbd0", 00:05:13.134 "bdev_name": "Malloc0" 00:05:13.134 }, 00:05:13.134 { 00:05:13.134 "nbd_device": "/dev/nbd1", 00:05:13.134 "bdev_name": "Malloc1" 00:05:13.134 } 
00:05:13.134 ]' 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.134 { 00:05:13.134 "nbd_device": "/dev/nbd0", 00:05:13.134 "bdev_name": "Malloc0" 00:05:13.134 }, 00:05:13.134 { 00:05:13.134 "nbd_device": "/dev/nbd1", 00:05:13.134 "bdev_name": "Malloc1" 00:05:13.134 } 00:05:13.134 ]' 00:05:13.134 17:35:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.391 /dev/nbd1' 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.391 /dev/nbd1' 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.391 256+0 records in 00:05:13.391 256+0 records out 00:05:13.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00688678 s, 152 MB/s 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.391 17:35:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.391 256+0 records in 00:05:13.391 256+0 records out 00:05:13.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165226 s, 63.5 MB/s 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.392 256+0 records in 00:05:13.392 256+0 records out 00:05:13.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019559 s, 53.6 MB/s 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.392 17:35:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.649 17:35:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.906 17:35:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.906 17:35:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.164 17:35:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.096 [2024-11-20 17:35:38.301232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.096 [2024-11-20 17:35:38.391553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.096 [2024-11-20 17:35:38.391683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.096 [2024-11-20 17:35:38.503119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.096 [2024-11-20 17:35:38.503184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.694 spdk_app_start Round 2 00:05:17.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.694 17:35:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.694 17:35:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:17.694 17:35:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
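Between rounds, the nbd_get_count check traced above confirms that nbd_stop_disk really detached everything: the nbd_get_disks RPC must return an empty list, i.e. a /dev/nbd count of 0, before the next round starts. A sketch of that check, using the rpc.py path and socket from the trace:

    nbd_count() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd
        # grep -c still prints 0 on no match but exits nonzero, which is
        # why the harness tolerates the failed status at this point
    }

    [[ $(nbd_count) -eq 0 ]] && echo 'all nbd devices detached'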
00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.694 17:35:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.694 17:35:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.694 Malloc0 00:05:17.694 17:35:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.953 Malloc1 00:05:17.953 17:35:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.953 17:35:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.212 /dev/nbd0 00:05:18.212 17:35:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.212 17:35:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.212 1+0 records in 00:05:18.212 1+0 records out 
00:05:18.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046892 s, 8.7 MB/s 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.212 17:35:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.212 17:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.212 17:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.212 17:35:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.470 /dev/nbd1 00:05:18.470 17:35:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.470 17:35:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.470 1+0 records in 00:05:18.470 1+0 records out 00:05:18.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292446 s, 14.0 MB/s 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.470 17:35:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.470 17:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.470 17:35:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.470 17:35:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.470 17:35:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.470 17:35:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.729 { 00:05:18.729 "nbd_device": "/dev/nbd0", 00:05:18.729 "bdev_name": "Malloc0" 00:05:18.729 }, 00:05:18.729 { 00:05:18.729 "nbd_device": "/dev/nbd1", 00:05:18.729 "bdev_name": "Malloc1" 00:05:18.729 } 
00:05:18.729 ]' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.729 { 00:05:18.729 "nbd_device": "/dev/nbd0", 00:05:18.729 "bdev_name": "Malloc0" 00:05:18.729 }, 00:05:18.729 { 00:05:18.729 "nbd_device": "/dev/nbd1", 00:05:18.729 "bdev_name": "Malloc1" 00:05:18.729 } 00:05:18.729 ]' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.729 /dev/nbd1' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.729 /dev/nbd1' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.729 256+0 records in 00:05:18.729 256+0 records out 00:05:18.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620446 s, 169 MB/s 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.729 256+0 records in 00:05:18.729 256+0 records out 00:05:18.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130223 s, 80.5 MB/s 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.729 256+0 records in 00:05:18.729 256+0 records out 00:05:18.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018347 s, 57.2 MB/s 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.729 17:35:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.988 17:35:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.247 17:35:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.505 17:35:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.505 17:35:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.764 17:35:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.330 [2024-11-20 17:35:43.784789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.589 [2024-11-20 17:35:43.885352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.589 [2024-11-20 17:35:43.885553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.589 [2024-11-20 17:35:44.002055] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.589 [2024-11-20 17:35:44.002137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.197 17:35:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58492 /var/tmp/spdk-nbd.sock 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58492 ']' 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
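The repeated killprocess sequences in this log all follow one shape: confirm the pid is alive, inspect what the process actually is (refusing to signal sudo), then kill it and reap it with wait so the next test starts from a clean slate. A sketch under those assumptions (the name killprocess_sketch is illustrative, not SPDK's helper):

    killprocess_sketch() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1            # still alive?
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1       # never signal a privilege wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reaps it; works because the harness spawned it
    }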
00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.197 17:35:46 event.app_repeat -- event/event.sh@39 -- # killprocess 58492 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58492 ']' 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58492 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58492 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.197 killing process with pid 58492 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58492' 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58492 00:05:23.197 17:35:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58492 00:05:23.456 spdk_app_start is called in Round 0. 00:05:23.456 Shutdown signal received, stop current app iteration 00:05:23.456 Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization... 00:05:23.456 spdk_app_start is called in Round 1. 00:05:23.456 Shutdown signal received, stop current app iteration 00:05:23.456 Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization... 00:05:23.456 spdk_app_start is called in Round 2. 00:05:23.456 Shutdown signal received, stop current app iteration 00:05:23.456 Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization... 00:05:23.456 spdk_app_start is called in Round 3. 00:05:23.456 Shutdown signal received, stop current app iteration 00:05:23.456 17:35:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:23.456 17:35:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:23.456 00:05:23.456 real 0m17.764s 00:05:23.456 user 0m38.526s 00:05:23.456 sys 0m2.230s 00:05:23.456 17:35:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.456 ************************************ 00:05:23.456 END TEST app_repeat 00:05:23.456 ************************************ 00:05:23.456 17:35:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.717 17:35:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:23.717 17:35:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.717 17:35:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.717 17:35:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.717 17:35:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.717 ************************************ 00:05:23.717 START TEST cpu_locks 00:05:23.717 ************************************ 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.717 * Looking for test storage... 
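Every START TEST / END TEST banner in this log comes from a run_test-style wrapper that names, times, and brackets each test function, which is what makes a failure in a log this long bisectable by name. The trace never shows its body, so the following is a reconstruction from the visible output, not SPDK's exact implementation:

    run_test_sketch() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' '************' "$name" '************'
        time "$@"                              # produces the real/user/sys lines seen above
        printf '%s\nEND TEST %s\n%s\n' '************' "$name" '************'
    }

    run_test_sketch cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh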
00:05:23.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.717 17:35:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.717 --rc genhtml_branch_coverage=1 00:05:23.717 --rc genhtml_function_coverage=1 00:05:23.717 --rc genhtml_legend=1 00:05:23.717 --rc geninfo_all_blocks=1 00:05:23.717 --rc geninfo_unexecuted_blocks=1 00:05:23.717 00:05:23.717 ' 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.717 --rc genhtml_branch_coverage=1 00:05:23.717 --rc genhtml_function_coverage=1 
00:05:23.717 --rc genhtml_legend=1 00:05:23.717 --rc geninfo_all_blocks=1 00:05:23.717 --rc geninfo_unexecuted_blocks=1 00:05:23.717 00:05:23.717 ' 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.717 --rc genhtml_branch_coverage=1 00:05:23.717 --rc genhtml_function_coverage=1 00:05:23.717 --rc genhtml_legend=1 00:05:23.717 --rc geninfo_all_blocks=1 00:05:23.717 --rc geninfo_unexecuted_blocks=1 00:05:23.717 00:05:23.717 ' 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.717 --rc genhtml_branch_coverage=1 00:05:23.717 --rc genhtml_function_coverage=1 00:05:23.717 --rc genhtml_legend=1 00:05:23.717 --rc geninfo_all_blocks=1 00:05:23.717 --rc geninfo_unexecuted_blocks=1 00:05:23.717 00:05:23.717 ' 00:05:23.717 17:35:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:23.717 17:35:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:23.717 17:35:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:23.717 17:35:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.717 17:35:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.717 ************************************ 00:05:23.717 START TEST default_locks 00:05:23.717 ************************************ 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58928 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58928 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58928 ']' 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.717 17:35:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.718 17:35:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.718 [2024-11-20 17:35:47.247243] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:23.718 [2024-11-20 17:35:47.247365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:05:23.976 [2024-11-20 17:35:47.404659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.976 [2024-11-20 17:35:47.499109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.546 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.546 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:24.546 17:35:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58928 00:05:24.546 17:35:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.546 17:35:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58928 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58928 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58928 ']' 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58928 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58928 00:05:24.803 killing process with pid 58928 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58928' 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58928 00:05:24.803 17:35:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58928 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58928 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58928 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58928 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58928 ']' 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.179 17:35:49 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.179 ERROR: process (pid: 58928) is no longer running 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58928) - No such process 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:26.179 ************************************ 00:05:26.179 END TEST default_locks 00:05:26.179 ************************************ 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.179 00:05:26.179 real 0m2.419s 00:05:26.179 user 0m2.369s 00:05:26.179 sys 0m0.484s 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.179 17:35:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.179 17:35:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:26.179 17:35:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.179 17:35:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.179 17:35:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.179 ************************************ 00:05:26.179 START TEST default_locks_via_rpc 00:05:26.179 ************************************ 00:05:26.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
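The default_locks run above exercises SPDK's CPU core lock files end to end: spdk_tgt started with -m 0x1 takes a lock on a /var/tmp/spdk_cpu_lock_* file for core 0, locks_exist confirms the lock through lslocks, and after the process is killed the retried waitforlisten fails with "No such process", which is the expected outcome. A minimal standalone version of the lock check, using the same lslocks/grep pipeline the trace shows (the pid value is illustrative):

  # Check whether an SPDK target process holds a CPU core lock file.
  pid=58928   # illustrative; the test takes this from spdk_tgt_pid
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $pid"
  else
      echo "no core lock for pid $pid" >&2
  fi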
00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58981 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58981 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58981 ']' 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.179 17:35:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.179 [2024-11-20 17:35:49.706399] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:26.179 [2024-11-20 17:35:49.706518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58981 ] 00:05:26.494 [2024-11-20 17:35:49.862835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.494 [2024-11-20 17:35:49.982424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 
58981 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58981 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58981 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58981 ']' 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58981 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58981 00:05:27.435 killing process with pid 58981 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58981' 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58981 00:05:27.435 17:35:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58981 00:05:29.347 ************************************ 00:05:29.347 END TEST default_locks_via_rpc 00:05:29.347 ************************************ 00:05:29.347 00:05:29.347 real 0m2.842s 00:05:29.347 user 0m2.819s 00:05:29.347 sys 0m0.478s 00:05:29.347 17:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.347 17:35:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.347 17:35:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:29.347 17:35:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.347 17:35:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.347 17:35:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.347 ************************************ 00:05:29.347 START TEST non_locking_app_on_locked_coremask 00:05:29.347 ************************************ 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:29.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
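default_locks_via_rpc, which completed above, covers the same lock files but toggles them on a live target: framework_disable_cpumask_locks releases the per-core locks (no_locks then expects zero /var/tmp/spdk_cpu_lock_* entries), and framework_enable_cpumask_locks re-claims them, after which locks_exist must succeed again. A rough equivalent with direct scripts/rpc.py calls, offered as an assumption about usage outside the harness (rpc_cmd in the test multiplexes these over one persistent connection):

  # Release and re-claim CPU core lock files on a running spdk_tgt.
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # expect no output while disabled
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks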
00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59044 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59044 /var/tmp/spdk.sock 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59044 ']' 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.347 17:35:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.347 [2024-11-20 17:35:52.595706] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:29.347 [2024-11-20 17:35:52.596027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59044 ] 00:05:29.347 [2024-11-20 17:35:52.753967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.347 [2024-11-20 17:35:52.869190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59060 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59060 /var/tmp/spdk2.sock 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59060 ']' 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
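The non_locking_app_on_locked_coremask case starting above runs two targets over the same core mask: the first spdk_tgt -m 0x1 claims the core 0 lock, and the second is still expected to start because it is launched with --disable-cpumask-locks and its own RPC socket. A minimal sketch of that pairing (the sleep is an illustrative stand-in for the harness's waitforlisten polling):

  # First instance claims /var/tmp/spdk_cpu_lock_000 for core 0.
  ./build/bin/spdk_tgt -m 0x1 &
  sleep 2   # the harness polls the RPC socket via waitforlisten instead
  # Second instance shares core 0 by skipping the lock claim entirely.
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &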
00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.290 17:35:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.290 [2024-11-20 17:35:53.603258] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:30.290 [2024-11-20 17:35:53.603538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:05:30.290 [2024-11-20 17:35:53.779744] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.290 [2024-11-20 17:35:53.779814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.549 [2024-11-20 17:35:54.014495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.931 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.931 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.931 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59044 00:05:31.931 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59044 00:05:31.931 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59044 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59044 ']' 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59044 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59044 00:05:32.193 killing process with pid 59044 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59044' 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59044 00:05:32.193 17:35:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59044 00:05:35.499 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59060 00:05:35.499 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59060 ']' 00:05:35.499 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59060 00:05:35.499 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:05:35.500 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.500 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59060 00:05:35.500 killing process with pid 59060 00:05:35.500 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.500 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.500 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59060' 00:05:35.500 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59060 00:05:35.500 17:35:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59060 00:05:37.410 00:05:37.410 real 0m8.161s 00:05:37.410 user 0m8.373s 00:05:37.410 sys 0m0.951s 00:05:37.410 17:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.410 17:36:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.410 ************************************ 00:05:37.410 END TEST non_locking_app_on_locked_coremask 00:05:37.410 ************************************ 00:05:37.410 17:36:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.410 17:36:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.411 17:36:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.411 17:36:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.411 ************************************ 00:05:37.411 START TEST locking_app_on_unlocked_coremask 00:05:37.411 ************************************ 00:05:37.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59173 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59173 /var/tmp/spdk.sock 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59173 ']' 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
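locking_app_on_unlocked_coremask, starting above, inverts that scenario: the first target runs with --disable-cpumask-locks so core 0 stays unclaimed, and a second target with locking left on can take the lock and come up on /var/tmp/spdk2.sock. One way to probe that both instances are alive, offered as an illustrative alternative to waitforlisten (rpc_get_methods is a standard SPDK RPC):

  # Each instance answers JSON-RPC on its own UNIX socket.
  ./scripts/rpc.py -s /var/tmp/spdk.sock  rpc_get_methods > /dev/null && echo "first up"
  ./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods > /dev/null && echo "second up"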
00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.411 17:36:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.411 [2024-11-20 17:36:00.800465] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:37.411 [2024-11-20 17:36:00.800783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:05:37.670 [2024-11-20 17:36:00.959453] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:37.670 [2024-11-20 17:36:00.959533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.670 [2024-11-20 17:36:01.077286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59189 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59189 /var/tmp/spdk2.sock 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59189 ']' 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.238 17:36:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.503 [2024-11-20 17:36:01.794310] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:38.503 [2024-11-20 17:36:01.794409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:05:38.503 [2024-11-20 17:36:01.967022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.761 [2024-11-20 17:36:02.201587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.142 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.142 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.142 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59189 00:05:40.142 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59189 00:05:40.143 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59173 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59173 ']' 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59173 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59173 00:05:40.403 killing process with pid 59173 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59173' 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59173 00:05:40.403 17:36:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59173 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59189 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59189 ']' 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59189 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59189 00:05:43.704 killing process with pid 59189 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.704 17:36:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59189' 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59189 00:05:43.704 17:36:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59189 00:05:45.617 ************************************ 00:05:45.617 END TEST locking_app_on_unlocked_coremask 00:05:45.617 ************************************ 00:05:45.617 00:05:45.617 real 0m7.980s 00:05:45.617 user 0m8.078s 00:05:45.617 sys 0m0.998s 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.617 17:36:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:45.617 17:36:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.617 17:36:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.617 17:36:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.617 ************************************ 00:05:45.617 START TEST locking_app_on_locked_coremask 00:05:45.617 ************************************ 00:05:45.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59302 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59302 /var/tmp/spdk.sock 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59302 ']' 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.617 17:36:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.617 [2024-11-20 17:36:08.838860] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:45.617 [2024-11-20 17:36:08.839008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:05:45.617 [2024-11-20 17:36:09.003907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.617 [2024-11-20 17:36:09.122149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59318 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59318 /var/tmp/spdk2.sock 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59318 /var/tmp/spdk2.sock 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59318 /var/tmp/spdk2.sock 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59318 ']' 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.575 17:36:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.575 [2024-11-20 17:36:09.839988] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:46.575 [2024-11-20 17:36:09.840267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59318 ] 00:05:46.575 [2024-11-20 17:36:10.014729] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59302 has claimed it. 00:05:46.575 [2024-11-20 17:36:10.014805] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.148 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59318) - No such process 00:05:47.148 ERROR: process (pid: 59318) is no longer running 00:05:47.148 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.148 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59302 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59302 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59302 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59302 ']' 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59302 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.149 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59302 00:05:47.409 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.409 killing process with pid 59302 00:05:47.409 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.409 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59302' 00:05:47.409 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59302 00:05:47.409 17:36:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59302 00:05:48.796 ************************************ 00:05:48.796 END TEST locking_app_on_locked_coremask 00:05:48.796 ************************************ 00:05:48.796 00:05:48.796 real 0m3.562s 00:05:48.796 user 0m3.689s 00:05:48.796 sys 0m0.645s 00:05:48.796 17:36:12 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.796 17:36:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.058 17:36:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.058 17:36:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.058 17:36:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.058 17:36:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.058 ************************************ 00:05:49.058 START TEST locking_overlapped_coremask 00:05:49.058 ************************************ 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59371 00:05:49.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59371 /var/tmp/spdk.sock 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59371 ']' 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.058 17:36:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.058 [2024-11-20 17:36:12.475343] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:49.058 [2024-11-20 17:36:12.475759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:05:49.319 [2024-11-20 17:36:12.653382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.319 [2024-11-20 17:36:12.777017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.319 [2024-11-20 17:36:12.777277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.319 [2024-11-20 17:36:12.777399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59389 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59389 /var/tmp/spdk2.sock 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59389 /var/tmp/spdk2.sock 00:05:49.891 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59389 ']' 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.153 17:36:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.153 [2024-11-20 17:36:13.497636] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:05:50.153 [2024-11-20 17:36:13.497957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:05:50.153 [2024-11-20 17:36:13.678993] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59371 has claimed it. 00:05:50.153 [2024-11-20 17:36:13.679082] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.724 ERROR: process (pid: 59389) is no longer running 00:05:50.724 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59389) - No such process 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59371 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59371 ']' 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59371 00:05:50.724 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.725 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.725 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59371 00:05:50.725 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.725 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.725 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59371' 00:05:50.725 killing process with pid 59371 00:05:50.725 17:36:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59371 00:05:50.725 17:36:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59371 00:05:52.642 00:05:52.642 real 0m3.467s 00:05:52.642 user 0m9.174s 00:05:52.642 sys 0m0.553s 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.642 ************************************ 00:05:52.642 END TEST locking_overlapped_coremask 00:05:52.642 ************************************ 00:05:52.642 17:36:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:52.642 17:36:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.642 17:36:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.642 17:36:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.642 ************************************ 00:05:52.642 START TEST locking_overlapped_coremask_via_rpc 00:05:52.642 ************************************ 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59448 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59448 /var/tmp/spdk.sock 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59448 ']' 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.642 17:36:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.642 [2024-11-20 17:36:15.960171] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:52.642 [2024-11-20 17:36:15.960299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59448 ] 00:05:52.642 [2024-11-20 17:36:16.118640] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
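The failed claim in locking_overlapped_coremask (ended above) follows directly from the masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the contested core is their bitwise intersection, which is exactly the core 2 named in the error. The check_remaining_locks helper then confirms the first target still holds /var/tmp/spdk_cpu_lock_000 through _002. The overlap can be reproduced in one line:

  # 0x7 & 0x1c == 0x4, i.e. only bit 2 is set: core 2 is contested.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))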
00:05:52.642 [2024-11-20 17:36:16.118699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.902 [2024-11-20 17:36:16.223399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.902 [2024-11-20 17:36:16.223639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.902 [2024-11-20 17:36:16.223723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.475 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.475 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.475 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59466 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59466 /var/tmp/spdk2.sock 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59466 ']' 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.476 17:36:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.476 [2024-11-20 17:36:16.912457] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:53.476 [2024-11-20 17:36:16.912820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59466 ] 00:05:53.737 [2024-11-20 17:36:17.087160] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.737 [2024-11-20 17:36:17.087228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.998 [2024-11-20 17:36:17.301432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.998 [2024-11-20 17:36:17.304983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.998 [2024-11-20 17:36:17.305005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.386 [2024-11-20 17:36:18.503028] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59448 has claimed it. 00:05:55.386 request: 00:05:55.386 { 00:05:55.386 "method": "framework_enable_cpumask_locks", 00:05:55.386 "req_id": 1 00:05:55.386 } 00:05:55.386 Got JSON-RPC error response 00:05:55.386 response: 00:05:55.386 { 00:05:55.386 "code": -32603, 00:05:55.386 "message": "Failed to claim CPU core: 2" 00:05:55.386 } 00:05:55.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
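The -32603 failure above is the expected outcome: pid 59448 was started with -m 0x7 and has just taken the cpumask locks for cores 0-2 via framework_enable_cpumask_locks, while pid 59466 runs with -m 0x1c (cores 2-4), so the second enable call collides on core 2. A minimal sketch of the overlap arithmetic (illustrative Python, not part of the test output; the real check in app.c:claim_cpu_cores creates per-core /var/tmp/spdk_cpu_lock_NNN files):

first_mask = 0x7     # pid 59448: cores 0, 1, 2
second_mask = 0x1c   # pid 59466: cores 2, 3, 4
overlap = first_mask & second_mask
contested = [c for c in range(overlap.bit_length()) if overlap >> c & 1]
print(contested)     # [2] -> "Failed to claim CPU core: 2"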
00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59448 /var/tmp/spdk.sock 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59448 ']' 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.386 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59466 /var/tmp/spdk2.sock 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59466 ']' 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.387 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.649 00:05:55.649 real 0m3.059s 00:05:55.649 user 0m1.109s 00:05:55.649 sys 0m0.118s 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.649 17:36:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.649 ************************************ 00:05:55.649 END TEST locking_overlapped_coremask_via_rpc 00:05:55.649 ************************************ 00:05:55.649 17:36:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:55.649 17:36:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59448 ]] 00:05:55.649 17:36:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59448 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59448 ']' 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59448 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59448 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59448' 00:05:55.649 killing process with pid 59448 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59448 00:05:55.649 17:36:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59448 00:05:57.038 17:36:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59466 ]] 00:05:57.038 17:36:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59466 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59466 ']' 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59466 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.038 
17:36:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59466 00:05:57.038 killing process with pid 59466 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59466' 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59466 00:05:57.038 17:36:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59466 00:05:58.422 17:36:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.422 Process with pid 59448 is not found 00:05:58.422 Process with pid 59466 is not found 00:05:58.422 17:36:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.422 17:36:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59448 ]] 00:05:58.422 17:36:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59448 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59448 ']' 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59448 00:05:58.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59448) - No such process 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59448 is not found' 00:05:58.422 17:36:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59466 ]] 00:05:58.422 17:36:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59466 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59466 ']' 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59466 00:05:58.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59466) - No such process 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59466 is not found' 00:05:58.422 17:36:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.422 ************************************ 00:05:58.422 END TEST cpu_locks 00:05:58.422 ************************************ 00:05:58.422 00:05:58.422 real 0m34.668s 00:05:58.422 user 0m57.549s 00:05:58.422 sys 0m5.067s 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.422 17:36:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.422 ************************************ 00:05:58.422 END TEST event 00:05:58.422 ************************************ 00:05:58.422 00:05:58.422 real 1m0.896s 00:05:58.422 user 1m48.994s 00:05:58.422 sys 0m8.144s 00:05:58.422 17:36:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.422 17:36:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.422 17:36:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.422 17:36:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.422 17:36:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.422 17:36:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.422 ************************************ 00:05:58.422 START TEST thread 00:05:58.422 ************************************ 00:05:58.422 17:36:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.422 * Looking for test storage... 
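The teardown above leans on kill -0: signal 0 delivers nothing and only reports whether the pid still exists, which is why the already-exited targets yield "kill: (59448) - No such process" and the script falls through to the "Process with pid 59448 is not found" message instead of failing. A rough Python equivalent of that liveness probe (illustrative only, not part of the harness):

import os

def pid_alive(pid: int) -> bool:
    try:
        os.kill(pid, 0)  # signal 0: existence check, nothing is delivered
        return True
    except ProcessLookupError:
        return False

print(pid_alive(59448))  # False once spdk_tgt has exited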
00:05:58.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.422 17:36:21 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.423 17:36:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.423 17:36:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.423 17:36:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.423 17:36:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.423 17:36:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.423 17:36:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.423 17:36:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.423 17:36:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.423 17:36:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.423 17:36:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.423 17:36:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.423 17:36:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:58.423 17:36:21 thread -- scripts/common.sh@345 -- # : 1 00:05:58.423 17:36:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.423 17:36:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.423 17:36:21 thread -- scripts/common.sh@365 -- # decimal 1 00:05:58.423 17:36:21 thread -- scripts/common.sh@353 -- # local d=1 00:05:58.423 17:36:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.423 17:36:21 thread -- scripts/common.sh@355 -- # echo 1 00:05:58.423 17:36:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.423 17:36:21 thread -- scripts/common.sh@366 -- # decimal 2 00:05:58.423 17:36:21 thread -- scripts/common.sh@353 -- # local d=2 00:05:58.423 17:36:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.423 17:36:21 thread -- scripts/common.sh@355 -- # echo 2 00:05:58.423 17:36:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.423 17:36:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.423 17:36:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.423 17:36:21 thread -- scripts/common.sh@368 -- # return 0 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.423 --rc genhtml_branch_coverage=1 00:05:58.423 --rc genhtml_function_coverage=1 00:05:58.423 --rc genhtml_legend=1 00:05:58.423 --rc geninfo_all_blocks=1 00:05:58.423 --rc geninfo_unexecuted_blocks=1 00:05:58.423 00:05:58.423 ' 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.423 --rc genhtml_branch_coverage=1 00:05:58.423 --rc genhtml_function_coverage=1 00:05:58.423 --rc genhtml_legend=1 00:05:58.423 --rc geninfo_all_blocks=1 00:05:58.423 --rc geninfo_unexecuted_blocks=1 00:05:58.423 00:05:58.423 ' 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:58.423 --rc genhtml_branch_coverage=1 00:05:58.423 --rc genhtml_function_coverage=1 00:05:58.423 --rc genhtml_legend=1 00:05:58.423 --rc geninfo_all_blocks=1 00:05:58.423 --rc geninfo_unexecuted_blocks=1 00:05:58.423 00:05:58.423 ' 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.423 --rc genhtml_branch_coverage=1 00:05:58.423 --rc genhtml_function_coverage=1 00:05:58.423 --rc genhtml_legend=1 00:05:58.423 --rc geninfo_all_blocks=1 00:05:58.423 --rc geninfo_unexecuted_blocks=1 00:05:58.423 00:05:58.423 ' 00:05:58.423 17:36:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.423 17:36:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.423 ************************************ 00:05:58.423 START TEST thread_poller_perf 00:05:58.423 ************************************ 00:05:58.423 17:36:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.423 [2024-11-20 17:36:21.951822] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:05:58.423 [2024-11-20 17:36:21.952186] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 00:05:58.683 [2024-11-20 17:36:22.114392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.683 [2024-11-20 17:36:22.214887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.683 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:00.067 [2024-11-20T17:36:23.607Z] ====================================== 00:06:00.067 [2024-11-20T17:36:23.607Z] busy:2616138216 (cyc) 00:06:00.067 [2024-11-20T17:36:23.607Z] total_run_count: 307000 00:06:00.067 [2024-11-20T17:36:23.607Z] tsc_hz: 2600000000 (cyc) 00:06:00.067 [2024-11-20T17:36:23.607Z] ====================================== 00:06:00.067 [2024-11-20T17:36:23.607Z] poller_cost: 8521 (cyc), 3277 (nsec) 00:06:00.067 00:06:00.068 real 0m1.460s 00:06:00.068 user 0m1.277s 00:06:00.068 sys 0m0.074s 00:06:00.068 17:36:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.068 17:36:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.068 ************************************ 00:06:00.068 END TEST thread_poller_perf 00:06:00.068 ************************************ 00:06:00.068 17:36:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.068 17:36:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:00.068 17:36:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.068 17:36:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.068 ************************************ 00:06:00.068 START TEST thread_poller_perf 00:06:00.068 ************************************ 00:06:00.068 17:36:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.068 [2024-11-20 17:36:23.453090] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:00.068 [2024-11-20 17:36:23.453363] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59662 ] 00:06:00.328 [2024-11-20 17:36:23.612397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.328 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:00.328 [2024-11-20 17:36:23.714427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.747 [2024-11-20T17:36:25.287Z] ====================================== 00:06:01.747 [2024-11-20T17:36:25.287Z] busy:2603141084 (cyc) 00:06:01.747 [2024-11-20T17:36:25.287Z] total_run_count: 3969000 00:06:01.747 [2024-11-20T17:36:25.287Z] tsc_hz: 2600000000 (cyc) 00:06:01.747 [2024-11-20T17:36:25.287Z] ====================================== 00:06:01.747 [2024-11-20T17:36:25.287Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:01.747 00:06:01.747 real 0m1.447s 00:06:01.747 user 0m1.264s 00:06:01.747 sys 0m0.076s 00:06:01.747 ************************************ 00:06:01.747 END TEST thread_poller_perf 00:06:01.747 ************************************ 00:06:01.747 17:36:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.747 17:36:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.747 17:36:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.747 ************************************ 00:06:01.747 END TEST thread 00:06:01.747 ************************************ 00:06:01.747 00:06:01.747 real 0m3.148s 00:06:01.747 user 0m2.648s 00:06:01.747 sys 0m0.276s 00:06:01.747 17:36:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.747 17:36:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.747 17:36:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:01.747 17:36:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.747 17:36:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.747 17:36:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.748 17:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:01.748 ************************************ 00:06:01.748 START TEST app_cmdline 00:06:01.748 ************************************ 00:06:01.748 17:36:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.748 * Looking for test storage... 
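The two poller_perf runs above contain everything needed to recompute poller_cost: busy cycles divided by total_run_count, with the nanosecond figure derived from tsc_hz. Checking the reported numbers under that assumption (arithmetic sketch only):

busy, runs, tsc_hz = 2_616_138_216, 307_000, 2_600_000_000
cost_cyc = busy // runs                       # 8521 cyc, matches the 1us-period run
cost_ns = cost_cyc * 1_000_000_000 // tsc_hz  # 3277 nsec
busy0, runs0 = 2_603_141_084, 3_969_000
print(busy0 // runs0)                         # 655 cyc (~251 nsec), matches the 0us-period run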
00:06:01.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.748 17:36:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.748 --rc genhtml_branch_coverage=1 00:06:01.748 --rc genhtml_function_coverage=1 00:06:01.748 --rc genhtml_legend=1 00:06:01.748 --rc geninfo_all_blocks=1 00:06:01.748 --rc geninfo_unexecuted_blocks=1 00:06:01.748 00:06:01.748 ' 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.748 --rc genhtml_branch_coverage=1 00:06:01.748 --rc genhtml_function_coverage=1 00:06:01.748 --rc genhtml_legend=1 00:06:01.748 --rc geninfo_all_blocks=1 00:06:01.748 --rc geninfo_unexecuted_blocks=1 00:06:01.748 
00:06:01.748 ' 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.748 --rc genhtml_branch_coverage=1 00:06:01.748 --rc genhtml_function_coverage=1 00:06:01.748 --rc genhtml_legend=1 00:06:01.748 --rc geninfo_all_blocks=1 00:06:01.748 --rc geninfo_unexecuted_blocks=1 00:06:01.748 00:06:01.748 ' 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.748 --rc genhtml_branch_coverage=1 00:06:01.748 --rc genhtml_function_coverage=1 00:06:01.748 --rc genhtml_legend=1 00:06:01.748 --rc geninfo_all_blocks=1 00:06:01.748 --rc geninfo_unexecuted_blocks=1 00:06:01.748 00:06:01.748 ' 00:06:01.748 17:36:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:01.748 17:36:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59746 00:06:01.748 17:36:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59746 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59746 ']' 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.748 17:36:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.748 17:36:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.748 [2024-11-20 17:36:25.155843] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:01.748 [2024-11-20 17:36:25.156155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59746 ] 00:06:02.009 [2024-11-20 17:36:25.308839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.009 [2024-11-20 17:36:25.408799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.578 17:36:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.578 17:36:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:02.578 17:36:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:02.840 { 00:06:02.840 "version": "SPDK v25.01-pre git sha1 5c8d99223", 00:06:02.840 "fields": { 00:06:02.840 "major": 25, 00:06:02.840 "minor": 1, 00:06:02.840 "patch": 0, 00:06:02.840 "suffix": "-pre", 00:06:02.840 "commit": "5c8d99223" 00:06:02.840 } 00:06:02.840 } 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.840 17:36:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:02.840 17:36:26 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.102 request: 00:06:03.102 { 00:06:03.102 "method": "env_dpdk_get_mem_stats", 00:06:03.102 "req_id": 1 00:06:03.102 } 00:06:03.102 Got JSON-RPC error response 00:06:03.102 response: 00:06:03.102 { 00:06:03.102 "code": -32601, 00:06:03.102 "message": "Method not found" 00:06:03.102 } 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.102 17:36:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59746 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59746 ']' 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59746 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59746 00:06:03.102 killing process with pid 59746 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59746' 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 59746 00:06:03.102 17:36:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 59746 00:06:04.485 00:06:04.485 real 0m3.069s 00:06:04.485 user 0m3.363s 00:06:04.485 sys 0m0.418s 00:06:04.485 17:36:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.485 17:36:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.485 ************************************ 00:06:04.485 END TEST app_cmdline 00:06:04.485 ************************************ 00:06:04.745 17:36:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.745 17:36:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.745 17:36:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.745 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:04.745 ************************************ 00:06:04.745 START TEST version 00:06:04.745 ************************************ 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.745 * Looking for test storage... 
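The -32601 "Method not found" above is the allow-list doing its job: this spdk_tgt instance was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method is rejected before dispatch. A minimal sketch of reproducing the exchange by hand over the RPC socket (illustrative; scripts/rpc.py performs the same JSON-RPC round trip, and a robust client would loop on recv):

import json, socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk.sock")
req = {"jsonrpc": "2.0", "method": "env_dpdk_get_mem_stats", "id": 1}
sock.sendall(json.dumps(req).encode())
print(sock.recv(4096).decode())  # expect error code -32601, "Method not found"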
00:06:04.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.745 17:36:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.745 17:36:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.745 17:36:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.745 17:36:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.745 17:36:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.745 17:36:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.745 17:36:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.745 17:36:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.745 17:36:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.745 17:36:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.745 17:36:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.745 17:36:28 version -- scripts/common.sh@344 -- # case "$op" in 00:06:04.745 17:36:28 version -- scripts/common.sh@345 -- # : 1 00:06:04.745 17:36:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.745 17:36:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.745 17:36:28 version -- scripts/common.sh@365 -- # decimal 1 00:06:04.745 17:36:28 version -- scripts/common.sh@353 -- # local d=1 00:06:04.745 17:36:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.745 17:36:28 version -- scripts/common.sh@355 -- # echo 1 00:06:04.745 17:36:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.745 17:36:28 version -- scripts/common.sh@366 -- # decimal 2 00:06:04.745 17:36:28 version -- scripts/common.sh@353 -- # local d=2 00:06:04.745 17:36:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.745 17:36:28 version -- scripts/common.sh@355 -- # echo 2 00:06:04.745 17:36:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.745 17:36:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.745 17:36:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.745 17:36:28 version -- scripts/common.sh@368 -- # return 0 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.745 --rc genhtml_branch_coverage=1 00:06:04.745 --rc genhtml_function_coverage=1 00:06:04.745 --rc genhtml_legend=1 00:06:04.745 --rc geninfo_all_blocks=1 00:06:04.745 --rc geninfo_unexecuted_blocks=1 00:06:04.745 00:06:04.745 ' 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.745 --rc genhtml_branch_coverage=1 00:06:04.745 --rc genhtml_function_coverage=1 00:06:04.745 --rc genhtml_legend=1 00:06:04.745 --rc geninfo_all_blocks=1 00:06:04.745 --rc geninfo_unexecuted_blocks=1 00:06:04.745 00:06:04.745 ' 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.745 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:04.745 --rc genhtml_branch_coverage=1 00:06:04.745 --rc genhtml_function_coverage=1 00:06:04.745 --rc genhtml_legend=1 00:06:04.745 --rc geninfo_all_blocks=1 00:06:04.745 --rc geninfo_unexecuted_blocks=1 00:06:04.745 00:06:04.745 ' 00:06:04.745 17:36:28 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.746 --rc genhtml_branch_coverage=1 00:06:04.746 --rc genhtml_function_coverage=1 00:06:04.746 --rc genhtml_legend=1 00:06:04.746 --rc geninfo_all_blocks=1 00:06:04.746 --rc geninfo_unexecuted_blocks=1 00:06:04.746 00:06:04.746 ' 00:06:04.746 17:36:28 version -- app/version.sh@17 -- # get_header_version major 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # cut -f2 00:06:04.746 17:36:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.746 17:36:28 version -- app/version.sh@17 -- # major=25 00:06:04.746 17:36:28 version -- app/version.sh@18 -- # get_header_version minor 00:06:04.746 17:36:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # cut -f2 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.746 17:36:28 version -- app/version.sh@18 -- # minor=1 00:06:04.746 17:36:28 version -- app/version.sh@19 -- # get_header_version patch 00:06:04.746 17:36:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # cut -f2 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.746 17:36:28 version -- app/version.sh@19 -- # patch=0 00:06:04.746 17:36:28 version -- app/version.sh@20 -- # get_header_version suffix 00:06:04.746 17:36:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.746 17:36:28 version -- app/version.sh@14 -- # cut -f2 00:06:04.746 17:36:28 version -- app/version.sh@20 -- # suffix=-pre 00:06:04.746 17:36:28 version -- app/version.sh@22 -- # version=25.1 00:06:04.746 17:36:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.746 17:36:28 version -- app/version.sh@28 -- # version=25.1rc0 00:06:04.746 17:36:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:04.746 17:36:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.746 17:36:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:04.746 17:36:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:04.746 00:06:04.746 real 0m0.210s 00:06:04.746 user 0m0.136s 00:06:04.746 sys 0m0.103s 00:06:04.746 17:36:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.746 17:36:28 version -- common/autotest_common.sh@10 -- # set +x 00:06:04.746 ************************************ 00:06:04.746 END TEST version 00:06:04.746 ************************************ 00:06:05.005 17:36:28 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:05.005 17:36:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:05.005 17:36:28 -- spdk/autotest.sh@194 -- # uname -s 00:06:05.005 17:36:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:05.005 17:36:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.005 17:36:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.005 17:36:28 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:05.005 17:36:28 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:05.005 17:36:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.005 17:36:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.005 17:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.005 ************************************ 00:06:05.005 START TEST blockdev_nvme 00:06:05.005 ************************************ 00:06:05.005 17:36:28 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:05.005 * Looking for test storage... 00:06:05.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:05.005 17:36:28 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.005 17:36:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.005 17:36:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.005 17:36:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.005 17:36:28 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.006 17:36:28 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.006 --rc genhtml_branch_coverage=1 00:06:05.006 --rc genhtml_function_coverage=1 00:06:05.006 --rc genhtml_legend=1 00:06:05.006 --rc geninfo_all_blocks=1 00:06:05.006 --rc geninfo_unexecuted_blocks=1 00:06:05.006 00:06:05.006 ' 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.006 --rc genhtml_branch_coverage=1 00:06:05.006 --rc genhtml_function_coverage=1 00:06:05.006 --rc genhtml_legend=1 00:06:05.006 --rc geninfo_all_blocks=1 00:06:05.006 --rc geninfo_unexecuted_blocks=1 00:06:05.006 00:06:05.006 ' 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.006 --rc genhtml_branch_coverage=1 00:06:05.006 --rc genhtml_function_coverage=1 00:06:05.006 --rc genhtml_legend=1 00:06:05.006 --rc geninfo_all_blocks=1 00:06:05.006 --rc geninfo_unexecuted_blocks=1 00:06:05.006 00:06:05.006 ' 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.006 --rc genhtml_branch_coverage=1 00:06:05.006 --rc genhtml_function_coverage=1 00:06:05.006 --rc genhtml_legend=1 00:06:05.006 --rc geninfo_all_blocks=1 00:06:05.006 --rc geninfo_unexecuted_blocks=1 00:06:05.006 00:06:05.006 ' 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:05.006 17:36:28 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59923 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59923 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59923 ']' 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.006 17:36:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.006 17:36:28 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:05.006 [2024-11-20 17:36:28.541603] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:05.006 [2024-11-20 17:36:28.542176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:06:05.267 [2024-11-20 17:36:28.704321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.528 [2024-11-20 17:36:28.814475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.101 17:36:29 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.101 17:36:29 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:06.101 17:36:29 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:06.101 17:36:29 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:06.101 17:36:29 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:06.101 17:36:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:06.101 17:36:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:06.101 17:36:29 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:06.101 17:36:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.101 17:36:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.362 17:36:29 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.362 17:36:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:06.362 17:36:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.362 17:36:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:06.362 17:36:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.363 17:36:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.363 17:36:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.363 17:36:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:06.363 17:36:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.363 17:36:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.363 17:36:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.363 17:36:29 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:06.363 17:36:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:06.363 17:36:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:06.363 17:36:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.363 17:36:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.625 17:36:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:06.625 17:36:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:06.625 17:36:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fad043a1-df34-4a64-a4de-dc6000a78d64"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fad043a1-df34-4a64-a4de-dc6000a78d64",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ec53f989-fcbe-4e2d-849e-13ebceabc29c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ec53f989-fcbe-4e2d-849e-13ebceabc29c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a37ad6b8-a9c1-497e-97b3-e86d2fc249ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a37ad6b8-a9c1-497e-97b3-e86d2fc249ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2558dcc8-d476-434e-b455-8c2066a9ab11"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2558dcc8-d476-434e-b455-8c2066a9ab11",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "31643980-3e54-4b67-a8da-63cda04d50f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "31643980-3e54-4b67-a8da-63cda04d50f3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b811a285-9d05-45c0-b64b-22c5dab70323"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b811a285-9d05-45c0-b64b-22c5dab70323",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:06.625 17:36:29 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:06.625 17:36:29 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:06.625 17:36:29 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:06.625 17:36:29 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59923 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59923 ']' 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59923 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:06.625 17:36:29 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59923 00:06:06.625 killing process with pid 59923 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59923' 00:06:06.625 17:36:29 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59923 00:06:06.626 17:36:29 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59923 00:06:08.536 17:36:31 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:08.536 17:36:31 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:08.536 17:36:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:08.536 17:36:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.536 17:36:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.536 ************************************ 00:06:08.536 START TEST bdev_hello_world 00:06:08.536 ************************************ 00:06:08.536 17:36:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:08.536 [2024-11-20 17:36:31.661024] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:08.536 [2024-11-20 17:36:31.661157] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60007 ] 00:06:08.536 [2024-11-20 17:36:31.823621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.536 [2024-11-20 17:36:31.941166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.139 [2024-11-20 17:36:32.512405] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:09.139 [2024-11-20 17:36:32.512459] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:09.139 [2024-11-20 17:36:32.512485] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:09.139 [2024-11-20 17:36:32.515079] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:09.139 [2024-11-20 17:36:32.516595] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:09.139 [2024-11-20 17:36:32.516635] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:09.139 [2024-11-20 17:36:32.517309] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
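That final notice completes hello_bdev's round trip: open the Nvme0n1 bdev, open an I/O channel, write the test string, read it back. For reference, a minimal sketch of the same standalone invocation, with the binary, JSON config, and bdev name copied verbatim from the trace above (SPDK_DIR is a stand-in for the checkout path; typically needs root for hugepage access):

# Sketch: re-run the hello_bdev example by hand against the same config.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/build/examples/hello_bdev" --json "$SPDK_DIR/test/bdev/bdev.json" -b Nvme0n1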
00:06:09.139 00:06:09.139 [2024-11-20 17:36:32.517346] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:10.075 ************************************ 00:06:10.075 END TEST bdev_hello_world 00:06:10.075 ************************************ 00:06:10.075 00:06:10.075 real 0m1.747s 00:06:10.075 user 0m1.438s 00:06:10.075 sys 0m0.198s 00:06:10.075 17:36:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.075 17:36:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:10.075 17:36:33 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:10.075 17:36:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.075 17:36:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.075 17:36:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.075 ************************************ 00:06:10.075 START TEST bdev_bounds 00:06:10.075 ************************************ 00:06:10.075 17:36:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:10.075 17:36:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60044 00:06:10.075 Process bdevio pid: 60044 00:06:10.075 17:36:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:10.075 17:36:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.075 17:36:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60044' 00:06:10.076 17:36:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60044 00:06:10.076 17:36:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60044 ']' 00:06:10.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.076 17:36:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.076 17:36:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.076 17:36:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.076 17:36:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.076 17:36:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:10.076 [2024-11-20 17:36:33.465056] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:06:10.076 [2024-11-20 17:36:33.465379] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60044 ] 00:06:10.335 [2024-11-20 17:36:33.628390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.335 [2024-11-20 17:36:33.753457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.335 [2024-11-20 17:36:33.753782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.335 [2024-11-20 17:36:33.753894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.905 17:36:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.905 17:36:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:10.905 17:36:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:11.165 I/O targets: 00:06:11.165 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:11.165 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:11.165 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.165 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.165 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.165 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:11.165 00:06:11.165 00:06:11.165 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.165 http://cunit.sourceforge.net/ 00:06:11.165 00:06:11.165 00:06:11.165 Suite: bdevio tests on: Nvme3n1 00:06:11.165 Test: blockdev write read block ...passed 00:06:11.165 Test: blockdev write zeroes read block ...passed 00:06:11.165 Test: blockdev write zeroes read no split ...passed 00:06:11.165 Test: blockdev write zeroes read split ...passed 00:06:11.165 Test: blockdev write zeroes read split partial ...passed 00:06:11.165 Test: blockdev reset ...[2024-11-20 17:36:34.698975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:11.165 [2024-11-20 17:36:34.702605] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
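Note how this CUnit run was started, per the trace above: bdevio is launched with -w so it parks after init, and tests.py then sends perform_tests over the app's RPC socket. A minimal sketch of that two-step pattern, commands copied from the trace (SPDK_DIR is a stand-in for the checkout path):

# bdevio waits (-w) until perform_tests arrives via RPC.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sudo "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests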
00:06:11.165 passed 00:06:11.165 Test: blockdev write read 8 blocks ...passed 00:06:11.427 Test: blockdev write read size > 128k ...passed 00:06:11.427 Test: blockdev write read invalid size ...passed 00:06:11.427 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.427 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.427 Test: blockdev write read max offset ...passed 00:06:11.427 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.427 Test: blockdev writev readv 8 blocks ...passed 00:06:11.427 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.427 Test: blockdev writev readv block ...passed 00:06:11.427 Test: blockdev writev readv size > 128k ...passed 00:06:11.427 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.427 Test: blockdev comparev and writev ...[2024-11-20 17:36:34.720623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aae0a000 len:0x1000 00:06:11.427 [2024-11-20 17:36:34.720691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.427 passed 00:06:11.427 Test: blockdev nvme passthru rw ...passed 00:06:11.427 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.427 Test: blockdev nvme admin passthru ...[2024-11-20 17:36:34.723514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.427 [2024-11-20 17:36:34.723554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.427 passed 00:06:11.427 Test: blockdev copy ...passed 00:06:11.427 Suite: bdevio tests on: Nvme2n3 00:06:11.427 Test: blockdev write read block ...passed 00:06:11.427 Test: blockdev write zeroes read block ...passed 00:06:11.427 Test: blockdev write zeroes read no split ...passed 00:06:11.427 Test: blockdev write zeroes read split ...passed 00:06:11.427 Test: blockdev write zeroes read split partial ...passed 00:06:11.427 Test: blockdev reset ...[2024-11-20 17:36:34.919367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.427 passed 00:06:11.427 Test: blockdev write read 8 blocks ...[2024-11-20 17:36:34.923040] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
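The COMPARE / COMPARE FAILURE (02/85) notice pairs inside each "comparev and writev" test are expected output rather than errors: the test issues an NVMe Compare against data it has modified, and the miscompare completion appears to be the branch under test, since each suite still reports passed. Which bdevs expose that compare path can be read back with the same rpc.py/jq tooling used earlier in this run; a sketch:

# List bdevs advertising NVMe Compare support (all six do, per the dump above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.supported_io_types.compare == true) | .name'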
00:06:11.427 passed 00:06:11.427 Test: blockdev write read size > 128k ...passed 00:06:11.427 Test: blockdev write read invalid size ...passed 00:06:11.427 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.427 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.427 Test: blockdev write read max offset ...passed 00:06:11.427 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.427 Test: blockdev writev readv 8 blocks ...passed 00:06:11.427 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.427 Test: blockdev writev readv block ...passed 00:06:11.427 Test: blockdev writev readv size > 128k ...passed 00:06:11.427 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.427 Test: blockdev comparev and writev ...[2024-11-20 17:36:34.942256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af206000 len:0x1000 00:06:11.427 [2024-11-20 17:36:34.942322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.427 passed 00:06:11.427 Test: blockdev nvme passthru rw ...passed 00:06:11.427 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:36:34.943853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.427 [2024-11-20 17:36:34.944020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.427 passed 00:06:11.427 Test: blockdev nvme admin passthru ...passed 00:06:11.427 Test: blockdev copy ...passed 00:06:11.427 Suite: bdevio tests on: Nvme2n2 00:06:11.427 Test: blockdev write read block ...passed 00:06:11.427 Test: blockdev write zeroes read block ...passed 00:06:11.427 Test: blockdev write zeroes read no split ...passed 00:06:11.687 Test: blockdev write zeroes read split ...passed 00:06:11.687 Test: blockdev write zeroes read split partial ...passed 00:06:11.687 Test: blockdev reset ...[2024-11-20 17:36:35.010780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.687 [2024-11-20 17:36:35.013943] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:11.687 passed 00:06:11.687 Test: blockdev write read 8 blocks ...
00:06:11.687 passed 00:06:11.687 Test: blockdev write read size > 128k ...passed 00:06:11.687 Test: blockdev write read invalid size ...passed 00:06:11.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.687 Test: blockdev write read max offset ...passed 00:06:11.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.687 Test: blockdev writev readv 8 blocks ...passed 00:06:11.687 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.687 Test: blockdev writev readv block ...passed 00:06:11.687 Test: blockdev writev readv size > 128k ...passed 00:06:11.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.687 Test: blockdev comparev and writev ...[2024-11-20 17:36:35.029259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc23c000 len:0x1000 00:06:11.687 [2024-11-20 17:36:35.029483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.687 passed 00:06:11.687 Test: blockdev nvme passthru rw ...passed 00:06:11.687 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:36:35.031675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.687 [2024-11-20 17:36:35.031777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.687 passed 00:06:11.687 Test: blockdev nvme admin passthru ...passed 00:06:11.687 Test: blockdev copy ...passed 00:06:11.687 Suite: bdevio tests on: Nvme2n1 00:06:11.687 Test: blockdev write read block ...passed 00:06:11.687 Test: blockdev write zeroes read block ...passed 00:06:11.687 Test: blockdev write zeroes read no split ...passed 00:06:11.687 Test: blockdev write zeroes read split ...passed 00:06:11.687 Test: blockdev write zeroes read split partial ...passed 00:06:11.687 Test: blockdev reset ...[2024-11-20 17:36:35.106431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.687 [2024-11-20 17:36:35.110574] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
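That is the third pass through the same nvme_ctrlr.c disconnect / bdev_nvme.c reconnect pair for 0000:00:12.0, since Nvme2n1, Nvme2n2, and Nvme2n3 share one controller and every suite's "blockdev reset" exercises it. The same reset can be requested administratively; a sketch, assuming the bdev_nvme_reset_controller RPC is available in this SPDK tree (it is not shown in this trace):

# Assumption: bdev_nvme_reset_controller is available; Nvme2 is the name
# attached for 0000:00:12.0 at the top of this run.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2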
00:06:11.687 passed 00:06:11.687 Test: blockdev write read 8 blocks ...passed 00:06:11.687 Test: blockdev write read size > 128k ...passed 00:06:11.687 Test: blockdev write read invalid size ...passed 00:06:11.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.688 Test: blockdev write read max offset ...passed 00:06:11.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.688 Test: blockdev writev readv 8 blocks ...passed 00:06:11.688 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.688 Test: blockdev writev readv block ...passed 00:06:11.688 Test: blockdev writev readv size > 128k ...passed 00:06:11.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.688 Test: blockdev comparev and writev ...passed 00:06:11.688 Test: blockdev nvme passthru rw ...[2024-11-20 17:36:35.128301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc238000 len:0x1000 00:06:11.688 [2024-11-20 17:36:35.128393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.688 passed 00:06:11.688 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.688 Test: blockdev nvme admin passthru ...[2024-11-20 17:36:35.129896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.688 [2024-11-20 17:36:35.129945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.688 passed 00:06:11.688 Test: blockdev copy ...passed 00:06:11.688 Suite: bdevio tests on: Nvme1n1 00:06:11.688 Test: blockdev write read block ...passed 00:06:11.688 Test: blockdev write zeroes read block ...passed 00:06:11.688 Test: blockdev write zeroes read no split ...passed 00:06:11.688 Test: blockdev write zeroes read split ...passed 00:06:11.688 Test: blockdev write zeroes read split partial ...passed 00:06:11.688 Test: blockdev reset ...[2024-11-20 17:36:35.205027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:11.688 [2024-11-20 17:36:35.208346] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:11.688 passed 00:06:11.688 Test: blockdev write read 8 blocks ...passed 00:06:11.688 Test: blockdev write read size > 128k ...passed 00:06:11.688 Test: blockdev write read invalid size ...passed 00:06:11.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.688 Test: blockdev write read max offset ...passed 00:06:11.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.688 Test: blockdev writev readv 8 blocks ...passed 00:06:11.688 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.688 Test: blockdev writev readv block ...passed 00:06:11.688 Test: blockdev writev readv size > 128k ...passed 00:06:11.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.949 Test: blockdev comparev and writev ...[2024-11-20 17:36:35.230982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc234000 len:0x1000 00:06:11.949 [2024-11-20 17:36:35.231122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.949 passed 00:06:11.949 Test: blockdev nvme passthru rw ...passed 00:06:11.949 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.949 Test: blockdev nvme admin passthru ...[2024-11-20 17:36:35.234042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.949 [2024-11-20 17:36:35.234102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.949 passed 00:06:11.949 Test: blockdev copy ...passed 00:06:11.949 Suite: bdevio tests on: Nvme0n1 00:06:11.949 Test: blockdev write read block ...passed 00:06:11.949 Test: blockdev write zeroes read block ...passed 00:06:11.949 Test: blockdev write zeroes read no split ...passed 00:06:11.949 Test: blockdev write zeroes read split ...passed 00:06:11.949 Test: blockdev write zeroes read split partial ...passed 00:06:11.949 Test: blockdev reset ...[2024-11-20 17:36:35.307751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:11.949 [2024-11-20 17:36:35.312804] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
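Before the Nvme0n1 suite's I/O tests below: that bdev is the only target with separate metadata — the bdev_get_bdevs dump earlier in this run reports "md_size": 64 and "md_interleave": false for Nvme0n1 alone — which is why its comparev_and_writev step is skipped. A sketch of confirming the layout with the tooling already in use here:

# Show the metadata layout behind the comparev skip (sketch).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {name, block_size, md_size, md_interleave}'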
00:06:11.949 passed 00:06:11.949 Test: blockdev write read 8 blocks ...passed 00:06:11.949 Test: blockdev write read size > 128k ...passed 00:06:11.949 Test: blockdev write read invalid size ...passed 00:06:11.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.949 Test: blockdev write read max offset ...passed 00:06:11.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.949 Test: blockdev writev readv 8 blocks ...passed 00:06:11.949 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.949 Test: blockdev writev readv block ...passed 00:06:11.949 Test: blockdev writev readv size > 128k ...passed 00:06:11.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.949 Test: blockdev comparev and writev ...passed 00:06:11.949 Test: blockdev nvme passthru rw ...[2024-11-20 17:36:35.329984] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:11.949 separate metadata which is not supported yet. 00:06:11.949 passed 00:06:11.949 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:36:35.331239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:11.949 [2024-11-20 17:36:35.331304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:11.949 passed 00:06:11.949 Test: blockdev nvme admin passthru ...passed 00:06:11.949 Test: blockdev copy ...passed 00:06:11.949 00:06:11.949 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.949 suites 6 6 n/a 0 0 00:06:11.949 tests 138 138 138 0 0 00:06:11.949 asserts 893 893 893 0 n/a 00:06:11.949 00:06:11.949 Elapsed time = 2.010 seconds 00:06:11.949 0 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60044 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60044 ']' 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60044 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60044 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60044' 00:06:11.949 killing process with pid 60044 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60044 00:06:11.949 17:36:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60044 00:06:13.332 17:36:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:13.332 00:06:13.332 real 0m3.264s 00:06:13.332 user 0m8.307s 00:06:13.332 sys 0m0.343s 00:06:13.332 17:36:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.332 17:36:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.332 ************************************ 00:06:13.332 END 
TEST bdev_bounds 00:06:13.332 ************************************ 00:06:13.332 17:36:36 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.332 17:36:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:13.332 17:36:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.332 17:36:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.332 ************************************ 00:06:13.332 START TEST bdev_nbd 00:06:13.332 ************************************ 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60109 00:06:13.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
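The bdev_nbd test starting here exports each bdev as a kernel /dev/nbdX node through the dedicated /var/tmp/spdk-nbd.sock application and verifies every device with a one-block direct-I/O dd, as the traces below show. A sketch of one export/verify/stop cycle, using only RPCs that appear in this run (requires the nbd kernel module, per the /sys/module/nbd check above):

# One NBD round trip for Nvme0n1 (sketch; target the bdev_svc's RPC socket).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
"$rpc" -s "$sock" nbd_get_disks
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0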
00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60109 /var/tmp/spdk-nbd.sock 00:06:13.332 17:36:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60109 ']' 00:06:13.333 17:36:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.333 17:36:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.333 17:36:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.333 17:36:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.333 17:36:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:13.333 17:36:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.333 [2024-11-20 17:36:36.787701] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:13.333 [2024-11-20 17:36:36.787846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.596 [2024-11-20 17:36:36.950524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.596 [2024-11-20 17:36:37.054443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.169 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 
00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.429 1+0 records in 00:06:14.429 1+0 records out 00:06:14.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750193 s, 5.5 MB/s 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.429 17:36:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.689 1+0 records in 
00:06:14.689 1+0 records out 00:06:14.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559846 s, 7.3 MB/s 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.689 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.950 1+0 records in 00:06:14.950 1+0 records out 00:06:14.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449971 s, 9.1 MB/s 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.950 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd3 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.210 1+0 records in 00:06:15.210 1+0 records out 00:06:15.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106204 s, 3.9 MB/s 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.210 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.211 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.211 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.211 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.471 1+0 records in 00:06:15.471 1+0 records out 00:06:15.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734995 s, 5.6 MB/s 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.471 17:36:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.471 17:36:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.730 1+0 records in 00:06:15.730 1+0 records out 00:06:15.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413139 s, 9.9 MB/s 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.730 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd0", 00:06:15.988 "bdev_name": "Nvme0n1" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd1", 00:06:15.988 "bdev_name": "Nvme1n1" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd2", 00:06:15.988 "bdev_name": "Nvme2n1" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd3", 00:06:15.988 "bdev_name": "Nvme2n2" 00:06:15.988 }, 00:06:15.988 { 
00:06:15.988 "nbd_device": "/dev/nbd4", 00:06:15.988 "bdev_name": "Nvme2n3" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd5", 00:06:15.988 "bdev_name": "Nvme3n1" 00:06:15.988 } 00:06:15.988 ]' 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd0", 00:06:15.988 "bdev_name": "Nvme0n1" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd1", 00:06:15.988 "bdev_name": "Nvme1n1" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd2", 00:06:15.988 "bdev_name": "Nvme2n1" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd3", 00:06:15.988 "bdev_name": "Nvme2n2" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd4", 00:06:15.988 "bdev_name": "Nvme2n3" 00:06:15.988 }, 00:06:15.988 { 00:06:15.988 "nbd_device": "/dev/nbd5", 00:06:15.988 "bdev_name": "Nvme3n1" 00:06:15.988 } 00:06:15.988 ]' 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.988 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.248 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.508 17:36:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.508 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.769 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.028 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.029 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.029 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd5 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.289 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.290 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.550 17:36:40 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.550 17:36:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:17.810 /dev/nbd0 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.810 1+0 records in 00:06:17.810 1+0 records out 00:06:17.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00187294 s, 2.2 MB/s 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.810 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:18.071 /dev/nbd1 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i 
<= 20 )) 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.071 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.072 1+0 records in 00:06:18.072 1+0 records out 00:06:18.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703371 s, 5.8 MB/s 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.072 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:18.331 /dev/nbd10 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.331 1+0 records in 00:06:18.331 1+0 records out 00:06:18.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365156 s, 11.2 MB/s 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.331 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:18.591 /dev/nbd11 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.591 1+0 records in 00:06:18.591 1+0 records out 00:06:18.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102462 s, 4.0 MB/s 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.591 17:36:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:18.902 /dev/nbd12 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.902 17:36:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.902 1+0 records in 00:06:18.902 1+0 records out 00:06:18.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355801 s, 11.5 MB/s 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:18.902 /dev/nbd13 00:06:18.902 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.903 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.162 1+0 records in 00:06:19.162 1+0 records out 00:06:19.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000854695 s, 4.8 MB/s 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd0", 00:06:19.162 "bdev_name": "Nvme0n1" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd1", 00:06:19.162 "bdev_name": "Nvme1n1" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd10", 00:06:19.162 "bdev_name": "Nvme2n1" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd11", 00:06:19.162 "bdev_name": "Nvme2n2" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd12", 00:06:19.162 "bdev_name": "Nvme2n3" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd13", 00:06:19.162 "bdev_name": "Nvme3n1" 00:06:19.162 } 00:06:19.162 ]' 00:06:19.162 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd0", 00:06:19.162 "bdev_name": "Nvme0n1" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd1", 00:06:19.162 "bdev_name": "Nvme1n1" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd10", 00:06:19.162 "bdev_name": "Nvme2n1" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd11", 00:06:19.162 "bdev_name": "Nvme2n2" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd12", 00:06:19.162 "bdev_name": "Nvme2n3" 00:06:19.162 }, 00:06:19.162 { 00:06:19.162 "nbd_device": "/dev/nbd13", 00:06:19.162 "bdev_name": "Nvme3n1" 00:06:19.162 } 00:06:19.162 ]' 00:06:19.163 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.424 /dev/nbd1 00:06:19.424 /dev/nbd10 00:06:19.424 /dev/nbd11 00:06:19.424 /dev/nbd12 00:06:19.424 /dev/nbd13' 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.424 /dev/nbd1 00:06:19.424 /dev/nbd10 00:06:19.424 /dev/nbd11 00:06:19.424 /dev/nbd12 00:06:19.424 /dev/nbd13' 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:19.424 256+0 records in 00:06:19.424 256+0 records out 00:06:19.424 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.00683251 s, 153 MB/s 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.424 256+0 records in 00:06:19.424 256+0 records out 00:06:19.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103067 s, 10.2 MB/s 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.424 256+0 records in 00:06:19.424 256+0 records out 00:06:19.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115623 s, 9.1 MB/s 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.424 17:36:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:19.686 256+0 records in 00:06:19.686 256+0 records out 00:06:19.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103521 s, 10.1 MB/s 00:06:19.686 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.686 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:19.686 256+0 records in 00:06:19.686 256+0 records out 00:06:19.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145845 s, 7.2 MB/s 00:06:19.686 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.686 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:19.946 256+0 records in 00:06:19.946 256+0 records out 00:06:19.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0747161 s, 14.0 MB/s 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:19.947 256+0 records in 00:06:19.947 256+0 records out 00:06:19.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145091 s, 7.2 MB/s 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.947 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.206 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- 
# waitfornbd_exit nbd1 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.464 17:36:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.723 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.983 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@45 -- # return 0 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.244 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.504 17:36:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:21.765 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:21.765 malloc_lvol_verify 00:06:22.026 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:22.026 b031a9ef-8459-4946-8638-a7b8a3f57901 00:06:22.026 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:22.287 af921fb0-df5b-4d2b-a5d0-b4faa56ce7ee 00:06:22.287 17:36:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:22.547 /dev/nbd0 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:22.547 mke2fs 1.47.0 (5-Feb-2023) 00:06:22.547 Discarding device blocks: 0/4096 done 00:06:22.547 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:22.547 00:06:22.547 Allocating group tables: 0/1 done 00:06:22.547 Writing inode tables: 0/1 done 00:06:22.547 Creating journal (1024 blocks): done 00:06:22.547 Writing superblocks and filesystem accounting information: 0/1 done 00:06:22.547 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.547 17:36:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60109 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60109 ']' 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60109 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.806 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60109 00:06:22.807 killing process with pid 60109 00:06:22.807 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.807 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.807 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 60109' 00:06:22.807 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60109 00:06:22.807 17:36:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60109 00:06:23.745 17:36:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:23.745 00:06:23.745 real 0m10.293s 00:06:23.745 user 0m14.605s 00:06:23.745 sys 0m3.269s 00:06:23.745 ************************************ 00:06:23.745 END TEST bdev_nbd 00:06:23.745 ************************************ 00:06:23.745 17:36:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.745 17:36:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:23.745 17:36:47 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:23.745 17:36:47 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:23.745 skipping fio tests on NVMe due to multi-ns failures. 00:06:23.745 17:36:47 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:23.745 17:36:47 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.745 17:36:47 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.745 17:36:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:23.745 17:36:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.745 17:36:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.745 ************************************ 00:06:23.745 START TEST bdev_verify 00:06:23.745 ************************************ 00:06:23.745 17:36:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.745 [2024-11-20 17:36:47.146285] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:23.745 [2024-11-20 17:36:47.146455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60490 ] 00:06:24.005 [2024-11-20 17:36:47.320288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.005 [2024-11-20 17:36:47.438647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.005 [2024-11-20 17:36:47.438756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.577 Running I/O for 5 seconds... 
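The bdev_nbd stage that just finished leans on two polling helpers whose xtrace dominates the output above: waitfornbd (run after each nbd_start_disk) and waitfornbd_exit (run after each nbd_stop_disk). A minimal sketch of waitfornbd, reconstructed from the traced lines at common/autotest_common.sh@872-893 — the inter-poll sleep and the failure return are assumptions, since the log only ever shows the success path:
waitfornbd() {
    local nbd_name=$1 i size
    # Poll /proc/partitions (up to 20 tries, per the traced loop bounds)
    # until the kernel has registered the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed delay; the pause is not visible in the trace
    done
    # Then retry a single 4 KiB O_DIRECT read until it returns real data.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1  # assumed delay
    done
    return 1
}
waitfornbd_exit is the mirror image: it polls the same /proc/partitions entry until the name disappears, so a stopped device is known to be fully torn down before the next test reuses it.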
00:06:26.907 19840.00 IOPS, 77.50 MiB/s [2024-11-20T17:36:51.387Z] 19904.00 IOPS, 77.75 MiB/s [2024-11-20T17:36:52.327Z] 19882.67 IOPS, 77.67 MiB/s [2024-11-20T17:36:53.271Z] 20352.00 IOPS, 79.50 MiB/s [2024-11-20T17:36:53.271Z] 19904.00 IOPS, 77.75 MiB/s
00:06:29.731 Latency(us)
00:06:29.731 [2024-11-20T17:36:53.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:29.731 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x0 length 0xbd0bd
00:06:29.731 Nvme0n1 : 5.08 1651.36 6.45 0.00 0.00 77188.26 11443.59 161319.38
00:06:29.731 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:29.731 Nvme0n1 : 5.10 1630.92 6.37 0.00 0.00 78263.95 15022.87 101227.91
00:06:29.731 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x0 length 0xa0000
00:06:29.731 Nvme1n1 : 5.08 1650.81 6.45 0.00 0.00 77028.23 10485.76 150833.62
00:06:29.731 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0xa0000 length 0xa0000
00:06:29.731 Nvme1n1 : 5.10 1630.02 6.37 0.00 0.00 78113.59 17644.31 96791.63
00:06:29.731 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x0 length 0x80000
00:06:29.731 Nvme2n1 : 5.08 1649.81 6.44 0.00 0.00 76918.77 11494.01 140347.86
00:06:29.731 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x80000 length 0x80000
00:06:29.731 Nvme2n1 : 5.11 1629.11 6.36 0.00 0.00 77887.83 19459.15 96388.33
00:06:29.731 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x0 length 0x80000
00:06:29.731 Nvme2n2 : 5.08 1649.36 6.44 0.00 0.00 76786.00 11141.12 134701.69
00:06:29.731 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x80000 length 0x80000
00:06:29.731 Nvme2n2 : 5.11 1628.25 6.36 0.00 0.00 77771.93 19459.15 98001.53
00:06:29.731 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x0 length 0x80000
00:06:29.731 Nvme2n3 : 5.10 1658.02 6.48 0.00 0.00 76332.36 8771.74 146800.64
00:06:29.731 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x80000 length 0x80000
00:06:29.731 Nvme2n3 : 5.11 1626.96 6.36 0.00 0.00 77657.12 17845.96 102034.51
00:06:29.731 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x0 length 0x20000
00:06:29.731 Nvme3n1 : 5.10 1657.57 6.47 0.00 0.00 76182.42 9175.04 158899.59
00:06:29.731 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.731 Verification LBA range: start 0x20000 length 0x20000
00:06:29.731 Nvme3n1 : 5.12 1626.45 6.35 0.00 0.00 77515.45 15325.34 103244.41
00:06:29.731 [2024-11-20T17:36:53.271Z] ===================================================================================================================
00:06:29.731 [2024-11-20T17:36:53.271Z] Total : 19688.65 76.91 0.00 0.00 77300.33 8771.74 161319.38
00:06:31.117
00:06:31.117 real 0m7.541s
00:06:31.117 user 0m14.001s
00:06:31.117 sys 0m0.280s
00:06:31.117 17:36:54 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.117 17:36:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.117 ************************************ 00:06:31.117 END TEST bdev_verify 00:06:31.117 ************************************ 00:06:31.117 17:36:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.117 17:36:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:31.117 17:36:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.378 17:36:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.378 ************************************ 00:06:31.378 START TEST bdev_verify_big_io 00:06:31.378 ************************************ 00:06:31.378 17:36:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.378 [2024-11-20 17:36:54.749754] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:31.378 [2024-11-20 17:36:54.749960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60600 ] 00:06:31.640 [2024-11-20 17:36:54.917836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.640 [2024-11-20 17:36:55.040988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.640 [2024-11-20 17:36:55.041005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.212 Running I/O for 5 seconds... 
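Both verify stages drive the same bdevperf binary against the six NVMe bdevs; only the I/O size differs (-o 4096 for bdev_verify above, -o 65536 for this big-io pass). The invocation pattern follows, with the flag meanings annotated; the flags are taken from the traced command line, and the reading of -C is an inference from the per-core job rows in the verify table above rather than something the log states:
# bdevperf flags as used by these tests:
#   --json      bdev configuration to load (test/bdev/bdev.json)
#   -q 128      queue depth per job
#   -o 65536    I/O size in bytes (64 KiB for the big-io pass)
#   -w verify   write a pattern, read it back, and compare
#   -t 5        run time in seconds
#   -m 0x3      core mask: two reactors, matching the two 'Reactor started' lines
#   -C          lets every core drive every bdev, consistent with each NvmeXnY
#               appearing twice in the tables (Core Mask 0x1 and 0x2)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3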
00:06:37.306 272.00 IOPS, 17.00 MiB/s [2024-11-20T17:37:01.815Z] 1400.50 IOPS, 87.53 MiB/s [2024-11-20T17:37:02.083Z] 2322.67 IOPS, 145.17 MiB/s
00:06:38.543 Latency(us)
00:06:38.543 [2024-11-20T17:37:02.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:38.543 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x0 length 0xbd0b
00:06:38.543 Nvme0n1 : 5.89 103.93 6.50 0.00 0.00 1167765.77 18249.26 1161499.57
00:06:38.543 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:38.543 Nvme0n1 : 5.77 110.96 6.93 0.00 0.00 1113003.87 17140.18 1180857.90
00:06:38.543 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x0 length 0xa000
00:06:38.543 Nvme1n1 : 5.89 104.24 6.52 0.00 0.00 1129849.84 122602.73 1058255.16
00:06:38.543 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0xa000 length 0xa000
00:06:38.543 Nvme1n1 : 5.77 102.54 6.41 0.00 0.00 1154853.32 112116.97 1832588.21
00:06:38.543 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x0 length 0x8000
00:06:38.543 Nvme2n1 : 5.89 108.60 6.79 0.00 0.00 1071604.34 115343.36 1006632.96
00:06:38.543 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x8000 length 0x8000
00:06:38.543 Nvme2n1 : 5.90 104.74 6.55 0.00 0.00 1086391.43 124215.93 1871304.86
00:06:38.543 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x0 length 0x8000
00:06:38.543 Nvme2n2 : 5.95 111.58 6.97 0.00 0.00 1015571.73 54041.99 1251838.42
00:06:38.543 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x8000 length 0x8000
00:06:38.543 Nvme2n2 : 5.99 116.33 7.27 0.00 0.00 958103.34 15526.99 1910021.51
00:06:38.543 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x0 length 0x8000
00:06:38.543 Nvme2n3 : 5.96 108.40 6.77 0.00 0.00 1020489.42 6024.27 2335904.69
00:06:38.543 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x8000 length 0x8000
00:06:38.543 Nvme2n3 : 5.99 118.79 7.42 0.00 0.00 904147.78 18148.43 1935832.62
00:06:38.543 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x0 length 0x2000
00:06:38.543 Nvme3n1 : 5.98 123.80 7.74 0.00 0.00 864893.03 8872.57 1193763.45
00:06:38.543 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.543 Verification LBA range: start 0x2000 length 0x2000
00:06:38.543 Nvme3n1 : 6.06 156.99 9.81 0.00 0.00 668108.45 261.51 1974549.27 [2024-11-20T17:37:02.083Z] =================================================================================================================== [2024-11-20T17:37:02.083Z] Total : 1370.90 85.68 0.00 0.00 994561.33 261.51 2335904.69
00:06:41.848
00:06:41.848 real 0m10.306s
00:06:41.848 user 0m19.483s
00:06:41.848 sys 0m0.318s
00:06:41.848 17:37:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.848 17:37:04 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:41.848 ************************************ 00:06:41.848 END TEST bdev_verify_big_io 00:06:41.848 ************************************ 00:06:41.848 17:37:05 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.848 17:37:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:41.848 17:37:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.848 17:37:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.848 ************************************ 00:06:41.848 START TEST bdev_write_zeroes 00:06:41.848 ************************************ 00:06:41.848 17:37:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.848 [2024-11-20 17:37:05.081196] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:41.848 [2024-11-20 17:37:05.081322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:06:41.848 [2024-11-20 17:37:05.238741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.848 [2024-11-20 17:37:05.354178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.421 Running I/O for 1 seconds... 
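A sanity check worth knowing when reading these tables: the MiB/s column is just IOPS × I/O size / 2^20. For the big-io Total row above, 1370.90 IOPS at 65536 bytes gives the reported 85.68 MiB/s, and the 4 KiB verify Total of 19688.65 IOPS gives 76.91 MiB/s:
# MiB/s = IOPS * io_size_bytes / 1048576
awk 'BEGIN { printf "%.2f MiB/s\n", 1370.90 * 65536 / 1048576 }'    # 85.68, big-io Total
awk 'BEGIN { printf "%.2f MiB/s\n", 19688.65 * 4096 / 1048576 }'    # 76.91, verify Total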
00:06:44.372 15681.00 IOPS, 61.25 MiB/s
00:06:44.372
00:06:44.372 Latency(us) [2024-11-20T17:37:07.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:44.372 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.372 Nvme0n1 : 1.67 1134.71 4.43 0.00 0.00 99304.84 4688.34 877577.45
00:06:44.372 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.372 Nvme1n1 : 1.30 2286.92 8.93 0.00 0.00 55869.51 8318.03 362968.62
00:06:44.372 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.372 Nvme2n1 : 1.30 2215.46 8.65 0.00 0.00 57435.20 8771.74 362968.62
00:06:44.372 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.372 Nvme2n2 : 1.30 2262.60 8.84 0.00 0.00 56128.09 8872.57 362968.62
00:06:44.372 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.372 Nvme2n3 : 1.30 2211.36 8.64 0.00 0.00 57307.10 8771.74 362968.62
00:06:44.372 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:44.372 Nvme3n1 : 1.30 2209.28 8.63 0.00 0.00 57226.04 8771.74 369421.39 [2024-11-20T17:37:07.912Z] =================================================================================================================== [2024-11-20T17:37:07.912Z] Total : 12320.32 48.13 0.00 0.00 61683.08 4688.34 877577.45
00:06:45.312
00:06:45.312 real 0m3.729s
00:06:45.312 user 0m3.411s
00:06:45.312 sys 0m0.199s
00:06:45.312 17:37:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.312 ************************************
00:06:45.312 END TEST bdev_write_zeroes
00:06:45.312 ************************************
00:06:45.312 17:37:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:45.312 17:37:08 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:45.312 17:37:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:45.312 17:37:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.312 17:37:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:45.312 ************************************
00:06:45.312 START TEST bdev_json_nonenclosed
00:06:45.312 ************************************
00:06:45.312 17:37:08 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:45.312 [2024-11-20 17:37:08.844613] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
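Every stage in this log is wrapped by run_test from autotest_common.sh, which produces the START/END banners, the real/user/sys triplets and the xtrace toggling seen throughout. A rough sketch of its shape, reconstructed from those visible side effects (the real helper carries extra bookkeeping for xtrace state and per-test accounting):
run_test() {
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"       # emits the real/user/sys lines above
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}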
00:06:45.312 [2024-11-20 17:37:08.844739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60777 ] 00:06:45.573 [2024-11-20 17:37:09.021867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.839 [2024-11-20 17:37:09.141421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.839 [2024-11-20 17:37:09.141522] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:45.839 [2024-11-20 17:37:09.141541] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:45.839 [2024-11-20 17:37:09.141552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.839 00:06:45.839 real 0m0.547s 00:06:45.839 user 0m0.339s 00:06:45.839 sys 0m0.104s 00:06:45.839 17:37:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.839 17:37:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:45.839 ************************************ 00:06:45.839 END TEST bdev_json_nonenclosed 00:06:45.839 ************************************ 00:06:45.839 17:37:09 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.839 17:37:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:45.839 17:37:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.839 17:37:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.839 ************************************ 00:06:45.839 START TEST bdev_json_nonarray 00:06:45.839 ************************************ 00:06:45.839 17:37:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:46.098 [2024-11-20 17:37:09.430910] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:46.098 [2024-11-20 17:37:09.431026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60803 ] 00:06:46.098 [2024-11-20 17:37:09.589568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.357 [2024-11-20 17:37:09.708128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.357 [2024-11-20 17:37:09.708243] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
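Both JSON negative tests hand bdevperf a deliberately malformed --json file and pass only if startup fails with the exact json_config errors shown above: SPDK requires the top-level value to be an object whose 'subsystems' key is an array. The two fixture files are not echoed into the log, so the following are hypothetical reconstructions that would trip the same two checks:
# Hypothetical contents -- the real test/bdev/nonenclosed.json and
# test/bdev/nonarray.json are not reproduced in this log.
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# -> json_config_prepare_ctx: Invalid JSON configuration: not enclosed in {}.
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
# -> json_config_prepare_ctx: Invalid JSON configuration: 'subsystems' should be an array.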
00:06:46.357 [2024-11-20 17:37:09.708263] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:46.357 [2024-11-20 17:37:09.708273] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:46.618
00:06:46.618 real 0m0.548s
00:06:46.618 user 0m0.340s
00:06:46.618 sys 0m0.104s
00:06:46.618 17:37:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:46.618 17:37:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:46.618 ************************************
00:06:46.618 END TEST bdev_json_nonarray
00:06:46.618 ************************************
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:06:46.618 17:37:09 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:06:46.618
00:06:46.618 real 0m41.641s
00:06:46.618 user 1m5.249s
00:06:46.618 sys 0m5.594s
00:06:46.618 17:37:09 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:46.618 17:37:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:46.618 ************************************
00:06:46.618 END TEST blockdev_nvme
00:06:46.618 ************************************
00:06:46.618 17:37:09 -- spdk/autotest.sh@209 -- # uname -s
00:06:46.618 17:37:09 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:06:46.618 17:37:09 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:46.618 17:37:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:46.618 17:37:09 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:46.618 17:37:09 -- common/autotest_common.sh@10 -- # set +x
00:06:46.618 ************************************
00:06:46.618 START TEST blockdev_nvme_gpt
00:06:46.618 ************************************
00:06:46.618 17:37:09 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:46.618 * Looking for test storage...
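
Every START/END banner and real/user/sys block in this log is emitted by the run_test wrapper from common/autotest_common.sh, which names a sub-test, runs it under the shell's timer, and brackets its xtrace output. A simplified sketch of that pattern (illustrative only, not the real implementation, which also toggles xtrace and propagates failures):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # the wrapped test; its timing becomes the real/user/sys block
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
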
00:06:46.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.618 17:37:10 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.618 --rc genhtml_branch_coverage=1 00:06:46.618 --rc genhtml_function_coverage=1 00:06:46.618 --rc genhtml_legend=1 00:06:46.618 --rc geninfo_all_blocks=1 00:06:46.618 --rc geninfo_unexecuted_blocks=1 00:06:46.618 00:06:46.618 ' 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.618 --rc 
genhtml_branch_coverage=1 00:06:46.618 --rc genhtml_function_coverage=1 00:06:46.618 --rc genhtml_legend=1 00:06:46.618 --rc geninfo_all_blocks=1 00:06:46.618 --rc geninfo_unexecuted_blocks=1 00:06:46.618 00:06:46.618 ' 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.618 --rc genhtml_branch_coverage=1 00:06:46.618 --rc genhtml_function_coverage=1 00:06:46.618 --rc genhtml_legend=1 00:06:46.618 --rc geninfo_all_blocks=1 00:06:46.618 --rc geninfo_unexecuted_blocks=1 00:06:46.618 00:06:46.618 ' 00:06:46.618 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.618 --rc genhtml_branch_coverage=1 00:06:46.618 --rc genhtml_function_coverage=1 00:06:46.618 --rc genhtml_legend=1 00:06:46.618 --rc geninfo_all_blocks=1 00:06:46.618 --rc geninfo_unexecuted_blocks=1 00:06:46.618 00:06:46.618 ' 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:46.618 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60881 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:46.619 17:37:10 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60881 00:06:46.619 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60881 ']' 00:06:46.619 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.619 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.619 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.619 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.880 17:37:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.880 [2024-11-20 17:37:10.264287] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:46.880 [2024-11-20 17:37:10.264465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ] 00:06:47.141 [2024-11-20 17:37:10.440561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.141 [2024-11-20 17:37:10.546537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.715 17:37:11 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.715 17:37:11 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:47.715 17:37:11 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:47.716 17:37:11 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:47.716 17:37:11 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:47.977 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.239 Waiting for block devices as requested 00:06:48.239 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.239 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.500 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.500 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.794 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
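
The get_zoned_devs pass traced here walks every /sys/block/nvme* node and asks is_block_zoned whether the namespace is zoned, by testing for the queue/zoned sysfs attribute and comparing its contents against "none". A standalone sketch of the same check (the sysfs path matches the trace; the wrapper shape is illustrative):

is_block_zoned() {
    local device=$1
    # no zoned attribute exposed: treat the device as not zoned
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    # the kernel reports "none" for regular, non-zoned block devices
    [[ $(</sys/block/$device/queue/zoned) != none ]]
}

Every namespace in this run reports "none", so each iteration falls through the [[ none != none ]] test, zoned_devs stays empty, and the GPT scan below is free to consider all six nvme candidates.
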
00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:53.794 BYT; 00:06:53.794 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:53.794 BYT; 00:06:53.794 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:53.794 17:37:16 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:53.794 17:37:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.794 17:37:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.794 17:37:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.794 17:37:17 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.794 17:37:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.794 17:37:17 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:54.734 The operation has completed successfully. 00:06:54.734 17:37:18 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:55.676 The operation has completed successfully. 00:06:55.676 17:37:19 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:56.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:56.608 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.608 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.608 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.608 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.867 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:56.867 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.867 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.867 [] 00:06:56.867 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.867 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:56.867 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:56.867 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:56.867 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:56.867 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:56.867 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.867 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.127 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.127 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:57.127 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:57.127 17:37:20 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.127 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.127 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.128 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:57.128 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.128 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.128 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.128 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:57.128 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:57.128 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.128 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.128 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:57.128 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.128 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:57.128 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:57.128 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4728405e-df8c-4486-934d-a034f58e36a5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4728405e-df8c-4486-934d-a034f58e36a5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "37d345c6-6165-4f41-b763-b2dae2cc2229"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "37d345c6-6165-4f41-b763-b2dae2cc2229",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5eefba6c-ccc9-4530-9efd-01c4d3b7424e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5eefba6c-ccc9-4530-9efd-01c4d3b7424e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c5fada3f-d960-43fd-8ef1-17fa0cbe5b39"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5fada3f-d960-43fd-8ef1-17fa0cbe5b39",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5c0213f0-5798-4be8-9bc0-8f83e1a2a146"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5c0213f0-5798-4be8-9bc0-8f83e1a2a146",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:57.389 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:57.389 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:57.389 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:57.389 17:37:20 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60881 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60881 ']' 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60881 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60881 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.389 killing process with pid 60881 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60881' 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60881 00:06:57.389 17:37:20 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60881 00:06:59.299 17:37:22 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:59.299 17:37:22 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.299 17:37:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:59.299 17:37:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.299 17:37:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.299 ************************************ 00:06:59.299 START TEST bdev_hello_world 00:06:59.299 ************************************ 00:06:59.299 17:37:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.299 
[2024-11-20 17:37:22.512418] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:06:59.299 [2024-11-20 17:37:22.512576] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61511 ] 00:06:59.299 [2024-11-20 17:37:22.675768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.299 [2024-11-20 17:37:22.795362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.869 [2024-11-20 17:37:23.373684] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:59.869 [2024-11-20 17:37:23.373751] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:59.869 [2024-11-20 17:37:23.373787] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:59.869 [2024-11-20 17:37:23.376559] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:59.869 [2024-11-20 17:37:23.377269] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:59.869 [2024-11-20 17:37:23.377312] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:59.869 [2024-11-20 17:37:23.377535] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:59.869 00:06:59.869 [2024-11-20 17:37:23.377572] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:00.809 00:07:00.809 real 0m1.807s 00:07:00.809 user 0m1.478s 00:07:00.809 sys 0m0.216s 00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:00.809 ************************************ 00:07:00.809 END TEST bdev_hello_world 00:07:00.809 ************************************ 00:07:00.809 17:37:24 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:00.809 17:37:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.809 17:37:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.809 17:37:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.809 ************************************ 00:07:00.809 START TEST bdev_bounds 00:07:00.809 ************************************ 00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:00.809 Process bdevio pid: 61553 00:07:00.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
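
waitforlisten, traced below with max_retries=100, parks the test until the freshly started bdevio process answers on its UNIX-domain RPC socket. A reduced sketch of that polling pattern using the stock rpc.py client from the same repo (the retry count mirrors the traced default; the sleep interval is an assumption):

for ((i = 0; i < 100; i++)); do
    if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; then
        break    # RPC server is up; the test can proceed
    fi
    sleep 0.5
done

The real helper layers more care on top of this, such as bailing out early if the target PID dies, so a crash fails fast instead of exhausting the retry budget.
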
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61553
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61553'
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61553
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61553 ']'
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:00.809 17:37:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:01.069 [2024-11-20 17:37:24.353223] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:01.069 [2024-11-20 17:37:24.353357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61553 ]
00:07:01.069 [2024-11-20 17:37:24.516544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:01.329 [2024-11-20 17:37:24.654245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:01.329 [2024-11-20 17:37:24.654580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:01.329 [2024-11-20 17:37:24.654752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.897 17:37:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:01.897 17:37:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:07:01.897 17:37:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:01.897 I/O targets:
00:07:01.897 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:07:01.897 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:07:01.897 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:07:01.897 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:01.897 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:01.897 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:01.897 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:07:01.897
00:07:01.897
00:07:01.897 CUnit - A unit testing framework for C - Version 2.1-3
00:07:01.897 http://cunit.sourceforge.net/
00:07:01.897
00:07:01.897
00:07:01.897 Suite: bdevio tests on: Nvme3n1
00:07:01.897 Test: blockdev write read block ...passed
00:07:01.897 Test: blockdev write zeroes read block ...passed
00:07:02.157 Test: blockdev write zeroes read no split ...passed
00:07:02.157 Test: blockdev write zeroes read split ...passed
00:07:02.157 Test: blockdev write zeroes
read split partial ...passed 00:07:02.157 Test: blockdev reset ...[2024-11-20 17:37:25.482125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:02.157 [2024-11-20 17:37:25.486017] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:07:02.157 passed 00:07:02.157 Test: blockdev write read 8 blocks ...passed 00:07:02.157 Test: blockdev write read size > 128k ...passed 00:07:02.157 Test: blockdev write read invalid size ...passed 00:07:02.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.157 Test: blockdev write read max offset ...passed 00:07:02.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.157 Test: blockdev writev readv 8 blocks ...passed 00:07:02.157 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.157 Test: blockdev writev readv block ...passed 00:07:02.157 Test: blockdev writev readv size > 128k ...passed 00:07:02.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.157 Test: blockdev comparev and writev ...[2024-11-20 17:37:25.494406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8804000 len:0x1000 00:07:02.157 [2024-11-20 17:37:25.494466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.157 passed 00:07:02.157 Test: blockdev nvme passthru rw ...passed 00:07:02.157 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:37:25.495620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.157 [2024-11-20 17:37:25.495731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.157 passed 00:07:02.157 Test: blockdev nvme admin passthru ...passed 00:07:02.157 Test: blockdev copy ...passed 00:07:02.157 Suite: bdevio tests on: Nvme2n3 00:07:02.157 Test: blockdev write read block ...passed 00:07:02.157 Test: blockdev write zeroes read block ...passed 00:07:02.157 Test: blockdev write zeroes read no split ...passed 00:07:02.157 Test: blockdev write zeroes read split ...passed 00:07:02.157 Test: blockdev write zeroes read split partial ...passed 00:07:02.157 Test: blockdev reset ...[2024-11-20 17:37:25.557550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:02.157 [2024-11-20 17:37:25.564325] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:02.157 passed 00:07:02.157 Test: blockdev write read 8 blocks ...passed 00:07:02.157 Test: blockdev write read size > 128k ...passed 00:07:02.157 Test: blockdev write read invalid size ...passed 00:07:02.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.157 Test: blockdev write read max offset ...passed 00:07:02.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.157 Test: blockdev writev readv 8 blocks ...passed 00:07:02.157 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.157 Test: blockdev writev readv block ...passed 00:07:02.157 Test: blockdev writev readv size > 128k ...passed 00:07:02.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.157 Test: blockdev comparev and writev ...[2024-11-20 17:37:25.577303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8802000 len:0x1000 00:07:02.157 [2024-11-20 17:37:25.577386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.157 passed 00:07:02.157 Test: blockdev nvme passthru rw ...passed 00:07:02.157 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.157 Test: blockdev nvme admin passthru ...[2024-11-20 17:37:25.578849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.157 [2024-11-20 17:37:25.578960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.157 passed 00:07:02.157 Test: blockdev copy ...passed 00:07:02.157 Suite: bdevio tests on: Nvme2n2 00:07:02.157 Test: blockdev write read block ...passed 00:07:02.157 Test: blockdev write zeroes read block ...passed 00:07:02.157 Test: blockdev write zeroes read no split ...passed 00:07:02.157 Test: blockdev write zeroes read split ...passed 00:07:02.157 Test: blockdev write zeroes read split partial ...passed 00:07:02.157 Test: blockdev reset ...[2024-11-20 17:37:25.657268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:02.157 [2024-11-20 17:37:25.661905] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:02.157 passed 00:07:02.157 Test: blockdev write read 8 blocks ...passed 00:07:02.157 Test: blockdev write read size > 128k ...passed 00:07:02.157 Test: blockdev write read invalid size ...passed 00:07:02.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.157 Test: blockdev write read max offset ...passed 00:07:02.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.157 Test: blockdev writev readv 8 blocks ...passed 00:07:02.157 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.157 Test: blockdev writev readv block ...passed 00:07:02.157 Test: blockdev writev readv size > 128k ...passed 00:07:02.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.157 Test: blockdev comparev and writev ...[2024-11-20 17:37:25.670707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd838000 len:0x1000 00:07:02.157 [2024-11-20 17:37:25.670759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.157 passed 00:07:02.157 Test: blockdev nvme passthru rw ...passed 00:07:02.157 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:37:25.671657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.157 [2024-11-20 17:37:25.671682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.157 passed 00:07:02.157 Test: blockdev nvme admin passthru ...passed 00:07:02.157 Test: blockdev copy ...passed 00:07:02.157 Suite: bdevio tests on: Nvme2n1 00:07:02.157 Test: blockdev write read block ...passed 00:07:02.157 Test: blockdev write zeroes read block ...passed 00:07:02.157 Test: blockdev write zeroes read no split ...passed 00:07:02.416 Test: blockdev write zeroes read split ...passed 00:07:02.416 Test: blockdev write zeroes read split partial ...passed 00:07:02.416 Test: blockdev reset ...[2024-11-20 17:37:25.746218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:02.416 [2024-11-20 17:37:25.752093] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:02.416 passed 00:07:02.416 Test: blockdev write read 8 blocks ...passed 00:07:02.416 Test: blockdev write read size > 128k ...passed 00:07:02.416 Test: blockdev write read invalid size ...passed 00:07:02.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.416 Test: blockdev write read max offset ...passed 00:07:02.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.416 Test: blockdev writev readv 8 blocks ...passed 00:07:02.416 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.416 Test: blockdev writev readv block ...passed 00:07:02.416 Test: blockdev writev readv size > 128k ...passed 00:07:02.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.416 Test: blockdev comparev and writev ...[2024-11-20 17:37:25.762019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd834000 len:0x1000 00:07:02.416 [2024-11-20 17:37:25.762075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.416 passed 00:07:02.416 Test: blockdev nvme passthru rw ...passed 00:07:02.416 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.416 Test: blockdev nvme admin passthru ...[2024-11-20 17:37:25.763570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.416 [2024-11-20 17:37:25.763597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.416 passed 00:07:02.416 Test: blockdev copy ...passed 00:07:02.416 Suite: bdevio tests on: Nvme1n1p2 00:07:02.416 Test: blockdev write read block ...passed 00:07:02.416 Test: blockdev write zeroes read block ...passed 00:07:02.416 Test: blockdev write zeroes read no split ...passed 00:07:02.416 Test: blockdev write zeroes read split ...passed 00:07:02.416 Test: blockdev write zeroes read split partial ...passed 00:07:02.416 Test: blockdev reset ...[2024-11-20 17:37:25.827546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:02.416 [2024-11-20 17:37:25.830408] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:02.417 passed 00:07:02.417 Test: blockdev write read 8 blocks ...passed 00:07:02.417 Test: blockdev write read size > 128k ...passed 00:07:02.417 Test: blockdev write read invalid size ...passed 00:07:02.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.417 Test: blockdev write read max offset ...passed 00:07:02.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.417 Test: blockdev writev readv 8 blocks ...passed 00:07:02.417 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.417 Test: blockdev writev readv block ...passed 00:07:02.417 Test: blockdev writev readv size > 128k ...passed 00:07:02.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.417 Test: blockdev comparev and writev ...[2024-11-20 17:37:25.841830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bd830000 len:0x1000 00:07:02.417 passed 00:07:02.417 Test: blockdev nvme passthru rw ...passed 00:07:02.417 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.417 Test: blockdev nvme admin passthru ...passed 00:07:02.417 Test: blockdev copy ...[2024-11-20 17:37:25.841888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.417 passed 00:07:02.417 Suite: bdevio tests on: Nvme1n1p1 00:07:02.417 Test: blockdev write read block ...passed 00:07:02.417 Test: blockdev write zeroes read block ...passed 00:07:02.417 Test: blockdev write zeroes read no split ...passed 00:07:02.417 Test: blockdev write zeroes read split ...passed 00:07:02.417 Test: blockdev write zeroes read split partial ...passed 00:07:02.417 Test: blockdev reset ...[2024-11-20 17:37:25.889757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:02.417 [2024-11-20 17:37:25.893534] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:02.417 passed 00:07:02.417 Test: blockdev write read 8 blocks ...passed 00:07:02.417 Test: blockdev write read size > 128k ...passed 00:07:02.417 Test: blockdev write read invalid size ...passed 00:07:02.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.417 Test: blockdev write read max offset ...passed 00:07:02.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.417 Test: blockdev writev readv 8 blocks ...passed 00:07:02.417 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.417 Test: blockdev writev readv block ...passed 00:07:02.417 Test: blockdev writev readv size > 128k ...passed 00:07:02.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.417 Test: blockdev comparev and writev ...[2024-11-20 17:37:25.903883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b860e000 len:0x1000 00:07:02.417 [2024-11-20 17:37:25.903925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.417 passed 00:07:02.417 Test: blockdev nvme passthru rw ...passed 00:07:02.417 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.417 Test: blockdev nvme admin passthru ...passed 00:07:02.417 Test: blockdev copy ...passed 00:07:02.417 Suite: bdevio tests on: Nvme0n1 00:07:02.417 Test: blockdev write read block ...passed 00:07:02.417 Test: blockdev write zeroes read block ...passed 00:07:02.417 Test: blockdev write zeroes read no split ...passed 00:07:02.417 Test: blockdev write zeroes read split ...passed 00:07:02.677 Test: blockdev write zeroes read split partial ...passed 00:07:02.677 Test: blockdev reset ...[2024-11-20 17:37:25.958334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:02.677 [2024-11-20 17:37:25.961449] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:02.677 passed 00:07:02.677 Test: blockdev write read 8 blocks ...passed 00:07:02.677 Test: blockdev write read size > 128k ...passed 00:07:02.677 Test: blockdev write read invalid size ...passed 00:07:02.677 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.677 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.677 Test: blockdev write read max offset ...passed 00:07:02.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.677 Test: blockdev writev readv 8 blocks ...passed 00:07:02.677 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.677 Test: blockdev writev readv block ...passed 00:07:02.677 Test: blockdev writev readv size > 128k ...passed 00:07:02.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.677 Test: blockdev comparev and writev ...[2024-11-20 17:37:25.972818] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:02.677 separate metadata which is not supported yet. 
00:07:02.677 passed 00:07:02.677 Test: blockdev nvme passthru rw ...passed 00:07:02.677 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.677 Test: blockdev nvme admin passthru ...[2024-11-20 17:37:25.973761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:02.677 [2024-11-20 17:37:25.973801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:02.677 passed 00:07:02.677 Test: blockdev copy ...passed 00:07:02.677 00:07:02.677 Run Summary: Type Total Ran Passed Failed Inactive 00:07:02.677 suites 7 7 n/a 0 0 00:07:02.677 tests 161 161 161 0 0 00:07:02.677 asserts 1025 1025 1025 0 n/a 00:07:02.677 00:07:02.677 Elapsed time = 1.447 seconds 00:07:02.677 0 00:07:02.677 17:37:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61553 00:07:02.677 17:37:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61553 ']' 00:07:02.677 17:37:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61553 00:07:02.677 17:37:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:02.677 17:37:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.677 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61553 00:07:02.677 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.677 killing process with pid 61553 00:07:02.677 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.677 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61553' 00:07:02.677 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61553 00:07:02.677 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61553 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:03.614 00:07:03.614 real 0m2.525s 00:07:03.614 user 0m6.462s 00:07:03.614 sys 0m0.329s 00:07:03.614 ************************************ 00:07:03.614 END TEST bdev_bounds 00:07:03.614 ************************************ 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:03.614 17:37:26 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:03.614 17:37:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:03.614 17:37:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.614 17:37:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.614 ************************************ 00:07:03.614 START TEST bdev_nbd 00:07:03.614 ************************************ 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:03.614 17:37:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61613 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61613 /var/tmp/spdk-nbd.sock 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61613 ']' 00:07:03.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.614 17:37:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:03.614 [2024-11-20 17:37:26.924348] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:07:03.614 [2024-11-20 17:37:26.924499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.614 [2024-11-20 17:37:27.084447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.873 [2024-11-20 17:37:27.217974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.536 17:37:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.795 1+0 records in 00:07:04.795 1+0 records out 00:07:04.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664747 s, 6.2 MB/s 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.795 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.053 1+0 records in 00:07:05.053 1+0 records out 00:07:05.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618758 s, 6.6 MB/s 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.053 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.314 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.315 1+0 records in 00:07:05.315 1+0 records out 00:07:05.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969721 s, 4.2 MB/s 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.315 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.576 1+0 records in 00:07:05.576 1+0 records out 00:07:05.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117701 s, 3.5 MB/s 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.576 17:37:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.852 1+0 records in 00:07:05.852 1+0 records out 00:07:05.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762941 s, 5.4 MB/s 00:07:05.852 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:05.853 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:06.119 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.120 1+0 records in 00:07:06.120 1+0 records out 00:07:06.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648323 s, 6.3 MB/s 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.120 1+0 records in 00:07:06.120 1+0 records out 00:07:06.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657515 s, 6.2 MB/s 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:06.120 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd0", 00:07:06.381 "bdev_name": "Nvme0n1" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd1", 00:07:06.381 "bdev_name": "Nvme1n1p1" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd2", 00:07:06.381 "bdev_name": "Nvme1n1p2" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd3", 00:07:06.381 "bdev_name": "Nvme2n1" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd4", 00:07:06.381 "bdev_name": "Nvme2n2" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd5", 00:07:06.381 "bdev_name": "Nvme2n3" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd6", 00:07:06.381 "bdev_name": "Nvme3n1" 00:07:06.381 } 00:07:06.381 ]' 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd0", 00:07:06.381 "bdev_name": "Nvme0n1" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd1", 00:07:06.381 "bdev_name": "Nvme1n1p1" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd2", 00:07:06.381 "bdev_name": "Nvme1n1p2" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd3", 00:07:06.381 "bdev_name": "Nvme2n1" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd4", 00:07:06.381 "bdev_name": "Nvme2n2" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd5", 00:07:06.381 "bdev_name": "Nvme2n3" 00:07:06.381 }, 00:07:06.381 { 00:07:06.381 "nbd_device": "/dev/nbd6", 00:07:06.381 "bdev_name": "Nvme3n1" 00:07:06.381 } 00:07:06.381 ]' 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.381 17:37:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.642 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.902 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.163 17:37:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.427 17:37:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.689 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.949 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:08.250 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.251 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.251 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.251 17:37:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.251 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.251 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:08.251 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.251 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.251 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:08.514 /dev/nbd0 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.514 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.514 1+0 records in 00:07:08.514 1+0 records out 00:07:08.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010132 s, 4.0 MB/s 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.515 17:37:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:08.775 /dev/nbd1 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.775 17:37:32 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.775 1+0 records in 00:07:08.775 1+0 records out 00:07:08.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533436 s, 7.7 MB/s 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.775 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:09.038 /dev/nbd10 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.038 1+0 records in 00:07:09.038 1+0 records out 00:07:09.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000890744 s, 4.6 MB/s 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.038 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:09.301 /dev/nbd11 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.301 1+0 records in 00:07:09.301 1+0 records out 00:07:09.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771658 s, 5.3 MB/s 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.301 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:09.561 /dev/nbd12 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.561 1+0 records in 00:07:09.561 1+0 records out 00:07:09.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376355 s, 10.9 MB/s 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.561 17:37:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:09.823 /dev/nbd13 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.823 1+0 records in 00:07:09.823 1+0 records out 00:07:09.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958764 s, 4.3 MB/s 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.823 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:10.085 /dev/nbd14 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.085 1+0 records in 00:07:10.085 1+0 records out 00:07:10.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570788 s, 7.2 MB/s 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.085 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd0", 00:07:10.347 "bdev_name": "Nvme0n1" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd1", 00:07:10.347 "bdev_name": "Nvme1n1p1" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd10", 00:07:10.347 "bdev_name": "Nvme1n1p2" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd11", 00:07:10.347 "bdev_name": "Nvme2n1" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd12", 00:07:10.347 "bdev_name": "Nvme2n2" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd13", 00:07:10.347 "bdev_name": "Nvme2n3" 
00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd14", 00:07:10.347 "bdev_name": "Nvme3n1" 00:07:10.347 } 00:07:10.347 ]' 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd0", 00:07:10.347 "bdev_name": "Nvme0n1" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd1", 00:07:10.347 "bdev_name": "Nvme1n1p1" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd10", 00:07:10.347 "bdev_name": "Nvme1n1p2" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd11", 00:07:10.347 "bdev_name": "Nvme2n1" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd12", 00:07:10.347 "bdev_name": "Nvme2n2" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd13", 00:07:10.347 "bdev_name": "Nvme2n3" 00:07:10.347 }, 00:07:10.347 { 00:07:10.347 "nbd_device": "/dev/nbd14", 00:07:10.347 "bdev_name": "Nvme3n1" 00:07:10.347 } 00:07:10.347 ]' 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.347 /dev/nbd1 00:07:10.347 /dev/nbd10 00:07:10.347 /dev/nbd11 00:07:10.347 /dev/nbd12 00:07:10.347 /dev/nbd13 00:07:10.347 /dev/nbd14' 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.347 /dev/nbd1 00:07:10.347 /dev/nbd10 00:07:10.347 /dev/nbd11 00:07:10.347 /dev/nbd12 00:07:10.347 /dev/nbd13 00:07:10.347 /dev/nbd14' 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:10.347 256+0 records in 00:07:10.347 256+0 records out 00:07:10.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00934158 s, 112 MB/s 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.347 17:37:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.607 256+0 records in 00:07:10.607 256+0 records out 00:07:10.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.437616 s, 2.4 MB/s 00:07:10.607 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.607 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.181 256+0 records in 00:07:11.181 256+0 records out 00:07:11.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.324137 s, 3.2 MB/s 00:07:11.181 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.181 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:11.181 256+0 records in 00:07:11.181 256+0 records out 00:07:11.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160002 s, 6.6 MB/s 00:07:11.181 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.181 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:11.443 256+0 records in 00:07:11.443 256+0 records out 00:07:11.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222823 s, 4.7 MB/s 00:07:11.443 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.444 17:37:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:11.704 256+0 records in 00:07:11.704 256+0 records out 00:07:11.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164979 s, 6.4 MB/s 00:07:11.704 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.704 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:11.963 256+0 records in 00:07:11.963 256+0 records out 00:07:11.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214511 s, 4.9 MB/s 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:11.963 256+0 records in 00:07:11.963 256+0 records out 00:07:11.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130556 s, 8.0 MB/s 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.963 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.224 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.486 17:37:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.748 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.009 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.271 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.532 17:37:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.532 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:13.793 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:14.052 malloc_lvol_verify 00:07:14.052 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:14.313 be91dd21-ea90-4f9a-81cd-4ae0d08dd3c1 00:07:14.313 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:14.573 7c8e6d58-f226-497d-a309-6d3fad45e6a6 00:07:14.573 17:37:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:14.833 /dev/nbd0 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:14.833 mke2fs 1.47.0 (5-Feb-2023) 00:07:14.833 Discarding device blocks: 0/4096 done 00:07:14.833 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:14.833 00:07:14.833 Allocating group tables: 0/1 done 00:07:14.833 Writing inode tables: 0/1 done 00:07:14.833 Creating journal (1024 blocks): done 00:07:14.833 Writing superblocks and filesystem accounting information: 0/1 done 00:07:14.833 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:14.833 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61613 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61613 ']' 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61613 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61613 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.094 killing process with pid 61613 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61613' 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61613 00:07:15.094 17:37:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61613 00:07:16.035 17:37:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:16.035 00:07:16.035 real 0m12.453s 00:07:16.035 user 0m16.818s 00:07:16.035 sys 0m4.129s 00:07:16.035 ************************************ 00:07:16.035 END TEST bdev_nbd 00:07:16.035 ************************************ 00:07:16.035 17:37:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.035 17:37:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:16.035 17:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:16.035 17:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:16.035 17:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:16.035 skipping fio tests on NVMe due to multi-ns failures. 00:07:16.035 17:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:16.035 17:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:16.035 17:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.035 17:37:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:16.035 17:37:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.035 17:37:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.035 ************************************ 00:07:16.035 START TEST bdev_verify 00:07:16.035 ************************************ 00:07:16.035 17:37:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.035 [2024-11-20 17:37:39.399326] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:16.035 [2024-11-20 17:37:39.399429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62040 ] 00:07:16.035 [2024-11-20 17:37:39.554493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.293 [2024-11-20 17:37:39.678745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.293 [2024-11-20 17:37:39.678747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.864 Running I/O for 5 seconds... 
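bdevperf has just been launched against the full bdev.json layout; a plausible reading of the flags on that command line (flag meanings are from bdevperf's usage text, not from this log):

    # Annotated re-run of the invocation above:
    #   --json <file>  attach the bdev configuration under test
    #   -q 128         keep 128 I/Os outstanding per job
    #   -o 4096        issue 4096-byte I/Os
    #   -w verify      write a pattern, read it back, and compare
    #   -t 5           run each job for 5 seconds
    #   -C             let every reactor core submit I/O to every bdev
    #   -m 0x3         run reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The -C/-m 0x3 pair is why the latency table that follows reports two jobs per bdev, one for Core Mask 0x1 and one for Core Mask 0x2.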
00:07:19.187 17792.00 IOPS, 69.50 MiB/s [2024-11-20T17:37:43.670Z]
18080.00 IOPS, 70.62 MiB/s [2024-11-20T17:37:44.613Z]
18154.67 IOPS, 70.92 MiB/s [2024-11-20T17:37:45.566Z]
18416.00 IOPS, 71.94 MiB/s [2024-11-20T17:37:45.566Z]
18444.80 IOPS, 72.05 MiB/s
00:07:22.026 Latency(us)
00:07:22.026 [2024-11-20T17:37:45.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:22.026 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x0 length 0xbd0bd
00:07:22.026 Nvme0n1 : 5.08 1209.62 4.73 0.00 0.00 105122.13 20568.22 103244.41
00:07:22.026 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:22.026 Nvme0n1 : 5.09 1406.98 5.50 0.00 0.00 90074.80 10233.70 76626.71
00:07:22.026 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x0 length 0x4ff80
00:07:22.026 Nvme1n1p1 : 5.11 1214.99 4.75 0.00 0.00 104710.54 14821.22 97598.23
00:07:22.026 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:22.026 Nvme1n1p1 : 5.10 1406.64 5.49 0.00 0.00 89914.11 8822.15 79449.80
00:07:22.026 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x0 length 0x4ff7f
00:07:22.026 Nvme1n1p2 : 5.11 1214.44 4.74 0.00 0.00 104611.38 12451.84 90742.15
00:07:22.026 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:22.026 Nvme1n1p2 : 5.10 1405.92 5.49 0.00 0.00 89789.84 9931.22 81466.29
00:07:22.026 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x0 length 0x80000
00:07:22.026 Nvme2n1 : 5.11 1213.95 4.74 0.00 0.00 104500.48 10838.65 88322.36
00:07:22.026 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x80000 length 0x80000
00:07:22.026 Nvme2n1 : 5.10 1405.61 5.49 0.00 0.00 89630.96 10334.52 85499.27
00:07:22.026 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x0 length 0x80000
00:07:22.026 Nvme2n2 : 5.12 1212.92 4.74 0.00 0.00 104362.29 13510.50 95178.44
00:07:22.026 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x80000 length 0x80000
00:07:22.026 Nvme2n2 : 5.05 1393.68 5.44 0.00 0.00 91444.89 17341.83 98001.53
00:07:22.026 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x0 length 0x80000
00:07:22.026 Nvme2n3 : 5.12 1211.89 4.73 0.00 0.00 104220.45 16434.41 102841.11
00:07:22.026 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x80000 length 0x80000
00:07:22.026 Nvme2n3 : 5.05 1393.33 5.44 0.00 0.00 91183.36 19761.62 82272.89
00:07:22.026 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.026 Verification LBA range: start 0x0 length 0x20000
00:07:22.027 Nvme3n1 : 5.13 1210.87 4.73 0.00 0.00 104066.69 18955.03 105664.20
00:07:22.027 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.027 Verification LBA range: start 0x20000 length 0x20000
00:07:22.027 Nvme3n1 : 5.08 1397.89 5.46 0.00 0.00 90649.21 6553.60 78643.20
00:07:22.027 [2024-11-20T17:37:45.567Z] ===================================================================================================================
00:07:22.027 [2024-11-20T17:37:45.567Z] Total : 18298.71 71.48 0.00 0.00 96955.76 6553.60 105664.20
00:07:23.408
00:07:23.408 real 0m7.369s
00:07:23.408 user 0m13.742s
00:07:23.408 sys 0m0.246s
00:07:23.408 ************************************
00:07:23.408 END TEST bdev_verify
00:07:23.408 ************************************
00:07:23.408 17:37:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.408 17:37:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:23.408 17:37:46 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:23.408 17:37:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:23.408 17:37:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.408 17:37:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:23.408 ************************************
00:07:23.408 START TEST bdev_verify_big_io
00:07:23.408 ************************************
00:07:23.408 17:37:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:23.408 [2024-11-20 17:37:46.838154] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:23.408 [2024-11-20 17:37:46.838274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62139 ]
00:07:23.669 [2024-11-20 17:37:46.998937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:23.669 [2024-11-20 17:37:47.119291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:23.669 [2024-11-20 17:37:47.119495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.614 Running I/O for 5 seconds...
00:07:30.735 1652.00 IOPS, 103.25 MiB/s [2024-11-20T17:37:54.275Z]
2898.00 IOPS, 181.12 MiB/s
00:07:30.735 Latency(us)
00:07:30.735 [2024-11-20T17:37:54.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:30.735 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x0 length 0xbd0b
00:07:30.735 Nvme0n1 : 5.85 97.32 6.08 0.00 0.00 1263467.28 26819.35 1458327.24
00:07:30.735 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:30.735 Nvme0n1 : 5.77 88.75 5.55 0.00 0.00 1346549.37 17341.83 1613193.85
00:07:30.735 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x0 length 0x4ff8
00:07:30.735 Nvme1n1p1 : 5.85 98.47 6.15 0.00 0.00 1211325.83 102437.81 1071160.71
00:07:30.735 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:30.735 Nvme1n1p1 : 5.77 93.05 5.82 0.00 0.00 1247613.93 137121.48 1348630.06
00:07:30.735 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x0 length 0x4ff7
00:07:30.735 Nvme1n1p2 : 6.02 101.99 6.37 0.00 0.00 1133151.76 69770.63 1090519.04
00:07:30.735 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:30.735 Nvme1n1p2 : 6.09 101.27 6.33 0.00 0.00 1095589.36 137928.07 1084066.26
00:07:30.735 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x0 length 0x8000
00:07:30.735 Nvme2n1 : 6.02 101.64 6.35 0.00 0.00 1100142.85 70173.93 1116330.14
00:07:30.735 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x8000 length 0x8000
00:07:30.735 Nvme2n1 : 6.13 107.08 6.69 0.00 0.00 1003732.72 31255.63 1780966.01
00:07:30.735 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x0 length 0x8000
00:07:30.735 Nvme2n2 : 6.02 106.25 6.64 0.00 0.00 1033485.71 96791.63 1148594.02
00:07:30.735 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x8000 length 0x8000
00:07:30.735 Nvme2n2 : 6.19 116.72 7.30 0.00 0.00 879118.10 17745.13 1806777.11
00:07:30.735 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x0 length 0x8000
00:07:30.735 Nvme2n3 : 6.10 115.38 7.21 0.00 0.00 928258.58 20568.22 1180857.90
00:07:30.735 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x8000 length 0x8000
00:07:30.735 Nvme2n3 : 6.30 120.42 7.53 0.00 0.00 844430.43 11393.18 2877937.82
00:07:30.735 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x0 length 0x2000
00:07:30.735 Nvme3n1 : 6.11 125.63 7.85 0.00 0.00 827152.02 3327.21 1213121.77
00:07:30.735 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.735 Verification LBA range: start 0x2000 length 0x2000
00:07:30.735 Nvme3n1 : 6.38 199.19 12.45 0.00 0.00 492632.19 321.38 2619826.81
00:07:30.735 [2024-11-20T17:37:54.275Z] ===================================================================================================================
00:07:30.735 [2024-11-20T17:37:54.275Z] Total : 1573.15 98.32 0.00 0.00 976451.96 321.38 2877937.82
00:07:36.023
00:07:36.023 real 0m11.874s
00:07:36.023 user 0m22.690s
00:07:36.023 sys 0m0.304s
00:07:36.023 17:37:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.023 17:37:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:36.023 ************************************
00:07:36.023 END TEST bdev_verify_big_io
00:07:36.023 ************************************
00:07:36.023 17:37:58 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:36.023 17:37:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:36.023 17:37:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.023 17:37:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:36.023 ************************************
00:07:36.023 START TEST bdev_write_zeroes
00:07:36.023 ************************************
00:07:36.023 17:37:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:36.023 [2024-11-20 17:37:58.757169] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:36.023 [2024-11-20 17:37:58.757295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62260 ]
00:07:36.023 [2024-11-20 17:37:58.917075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.023 [2024-11-20 17:37:59.019251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.284 Running I/O for 1 seconds...
00:07:37.490 3129.00 IOPS, 12.22 MiB/s
00:07:37.490
00:07:37.490 Latency(us)
00:07:37.490 [2024-11-20T17:38:01.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:37.490 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:37.490 Nvme0n1 : 1.21 335.29 1.31 0.00 0.00 349304.71 6654.42 1000180.18
00:07:37.490 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:37.490 Nvme1n1p1 : 1.10 697.14 2.72 0.00 0.00 178751.21 10334.52 916294.10
00:07:37.490 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:37.490 Nvme1n1p2 : 1.10 638.41 2.49 0.00 0.00 198899.33 16131.94 916294.10
00:07:37.490 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:37.490 Nvme2n1 : 1.10 579.86 2.27 0.00 0.00 218972.79 17039.36 929199.66
00:07:37.490 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:37.490 Nvme2n2 : 1.10 579.38 2.26 0.00 0.00 218502.07 15426.17 929199.66
00:07:37.490 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:37.490 Nvme2n3 : 1.11 578.88 2.26 0.00 0.00 218425.03 14317.10 922746.88
00:07:37.490 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:37.490 Nvme3n1 : 1.11 578.41 2.26 0.00 0.00 218388.48 13712.15 922746.88
00:07:37.490 [2024-11-20T17:38:01.030Z] ===================================================================================================================
00:07:37.490 [2024-11-20T17:38:01.030Z] Total : 3987.37 15.58 0.00 0.00 220549.14 6654.42 1000180.18
00:07:38.433
00:07:38.433 real 0m3.086s
00:07:38.433 user 0m2.774s
00:07:38.433 sys 0m0.197s
00:07:38.433 17:38:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.433 17:38:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:38.433 ************************************
00:07:38.433 END TEST bdev_write_zeroes
00:07:38.433 ************************************
00:07:38.433 17:38:01 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:38.433 17:38:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:38.433 17:38:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.433 17:38:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:38.433 ************************************
00:07:38.433 START TEST bdev_json_nonenclosed
00:07:38.433 ************************************
00:07:38.433 17:38:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:38.433 [2024-11-20 17:38:01.880245] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:38.433 [2024-11-20 17:38:01.880367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62313 ] 00:07:38.694 [2024-11-20 17:38:02.036934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.694 [2024-11-20 17:38:02.124490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.694 [2024-11-20 17:38:02.124564] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:38.694 [2024-11-20 17:38:02.124577] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.694 [2024-11-20 17:38:02.124585] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.955 00:07:38.955 real 0m0.463s 00:07:38.955 user 0m0.266s 00:07:38.955 sys 0m0.093s 00:07:38.955 17:38:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.955 ************************************ 00:07:38.955 END TEST bdev_json_nonenclosed 00:07:38.955 ************************************ 00:07:38.955 17:38:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:38.955 17:38:02 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.955 17:38:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:38.955 17:38:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.955 17:38:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.955 ************************************ 00:07:38.955 START TEST bdev_json_nonarray 00:07:38.955 ************************************ 00:07:38.955 17:38:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.955 [2024-11-20 17:38:02.400276] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:38.955 [2024-11-20 17:38:02.400447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62344 ] 00:07:39.264 [2024-11-20 17:38:02.575138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.264 [2024-11-20 17:38:02.662594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.264 [2024-11-20 17:38:02.662676] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
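The two JSON negative tests around this point both fail inside json_config_prepare_ctx, and the two *ERROR* lines above describe the same contract from opposite sides: a --json config must be a single object enclosed in {} whose top-level "subsystems" key is an array. A minimal well-formed sketch, written as a shell heredoc (the Malloc0 entry is illustrative and not part of this run):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF

The nonenclosed.json fixture drops the outer braces and the nonarray.json fixture makes "subsystems" a non-array, which is exactly what the two error messages report.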
00:07:39.264 [2024-11-20 17:38:02.662692] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:39.264 [2024-11-20 17:38:02.662699] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.563 00:07:39.563 real 0m0.499s 00:07:39.563 user 0m0.285s 00:07:39.563 sys 0m0.109s 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.563 ************************************ 00:07:39.563 END TEST bdev_json_nonarray 00:07:39.563 ************************************ 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 17:38:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:39.563 17:38:02 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:39.563 17:38:02 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:39.563 17:38:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.563 17:38:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.563 17:38:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 ************************************ 00:07:39.563 START TEST bdev_gpt_uuid 00:07:39.563 ************************************ 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62364 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62364 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62364 ']' 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 17:38:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:39.563 [2024-11-20 17:38:02.918029] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
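The bdev_gpt_uuid test starting here brings up a standalone spdk_tgt, loads bdev.json, and then validates the two GPT partition bdevs by looking each one up by its unique partition GUID and checking the alias and driver_specific.gpt fields in the dumps that follow. A sketch of the same lookup done by hand against the running target (GUID and jq filter taken from the trace below):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].aliases[0]'
    # Prints the same GUID back if the partition bdev (Nvme1n1p1 / SPDK_TEST_first) is present.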
00:07:39.563 [2024-11-20 17:38:02.918143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62364 ] 00:07:39.563 [2024-11-20 17:38:03.068831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.825 [2024-11-20 17:38:03.155622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.396 17:38:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.396 17:38:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:40.396 17:38:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:40.396 17:38:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.396 17:38:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 Some configs were skipped because the RPC state that can call them passed over. 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:40.658 { 00:07:40.658 "name": "Nvme1n1p1", 00:07:40.658 "aliases": [ 00:07:40.658 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:40.658 ], 00:07:40.658 "product_name": "GPT Disk", 00:07:40.658 "block_size": 4096, 00:07:40.658 "num_blocks": 655104, 00:07:40.658 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:40.658 "assigned_rate_limits": { 00:07:40.658 "rw_ios_per_sec": 0, 00:07:40.658 "rw_mbytes_per_sec": 0, 00:07:40.658 "r_mbytes_per_sec": 0, 00:07:40.658 "w_mbytes_per_sec": 0 00:07:40.658 }, 00:07:40.658 "claimed": false, 00:07:40.658 "zoned": false, 00:07:40.658 "supported_io_types": { 00:07:40.658 "read": true, 00:07:40.658 "write": true, 00:07:40.658 "unmap": true, 00:07:40.658 "flush": true, 00:07:40.658 "reset": true, 00:07:40.658 "nvme_admin": false, 00:07:40.658 "nvme_io": false, 00:07:40.658 "nvme_io_md": false, 00:07:40.658 "write_zeroes": true, 00:07:40.658 "zcopy": false, 00:07:40.658 "get_zone_info": false, 00:07:40.658 "zone_management": false, 00:07:40.658 "zone_append": false, 00:07:40.658 "compare": true, 00:07:40.658 "compare_and_write": false, 00:07:40.658 "abort": true, 00:07:40.658 "seek_hole": false, 00:07:40.658 "seek_data": false, 00:07:40.658 "copy": true, 00:07:40.658 "nvme_iov_md": false 00:07:40.658 }, 00:07:40.658 "driver_specific": { 
00:07:40.658 "gpt": { 00:07:40.658 "base_bdev": "Nvme1n1", 00:07:40.658 "offset_blocks": 256, 00:07:40.658 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:40.658 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:40.658 "partition_name": "SPDK_TEST_first" 00:07:40.658 } 00:07:40.658 } 00:07:40.658 } 00:07:40.658 ]' 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:40.658 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:40.920 { 00:07:40.920 "name": "Nvme1n1p2", 00:07:40.920 "aliases": [ 00:07:40.920 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:40.920 ], 00:07:40.920 "product_name": "GPT Disk", 00:07:40.920 "block_size": 4096, 00:07:40.920 "num_blocks": 655103, 00:07:40.920 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:40.920 "assigned_rate_limits": { 00:07:40.920 "rw_ios_per_sec": 0, 00:07:40.920 "rw_mbytes_per_sec": 0, 00:07:40.920 "r_mbytes_per_sec": 0, 00:07:40.920 "w_mbytes_per_sec": 0 00:07:40.920 }, 00:07:40.920 "claimed": false, 00:07:40.920 "zoned": false, 00:07:40.920 "supported_io_types": { 00:07:40.920 "read": true, 00:07:40.920 "write": true, 00:07:40.920 "unmap": true, 00:07:40.920 "flush": true, 00:07:40.920 "reset": true, 00:07:40.920 "nvme_admin": false, 00:07:40.920 "nvme_io": false, 00:07:40.920 "nvme_io_md": false, 00:07:40.920 "write_zeroes": true, 00:07:40.920 "zcopy": false, 00:07:40.920 "get_zone_info": false, 00:07:40.920 "zone_management": false, 00:07:40.920 "zone_append": false, 00:07:40.920 "compare": true, 00:07:40.920 "compare_and_write": false, 00:07:40.920 "abort": true, 00:07:40.920 "seek_hole": false, 00:07:40.920 "seek_data": false, 00:07:40.920 "copy": true, 00:07:40.920 "nvme_iov_md": false 00:07:40.920 }, 00:07:40.920 "driver_specific": { 00:07:40.920 "gpt": { 00:07:40.920 "base_bdev": "Nvme1n1", 00:07:40.920 "offset_blocks": 655360, 00:07:40.920 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:40.920 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:40.920 "partition_name": "SPDK_TEST_second" 00:07:40.920 } 00:07:40.920 } 00:07:40.920 } 00:07:40.920 ]' 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:40.920 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62364 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62364 ']' 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62364 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62364 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.921 killing process with pid 62364 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62364' 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62364 00:07:40.921 17:38:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62364 00:07:42.306 00:07:42.306 real 0m2.755s 00:07:42.306 user 0m2.928s 00:07:42.306 sys 0m0.366s 00:07:42.306 17:38:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.306 ************************************ 00:07:42.306 17:38:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.306 END TEST bdev_gpt_uuid 00:07:42.306 ************************************ 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:42.306 17:38:05 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:42.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.568 Waiting for block devices as requested 00:07:42.568 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.879 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:42.879 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.879 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:48.200 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:48.200 17:38:11 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:48.200 17:38:11 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:48.464 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.464 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:48.464 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:48.464 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:48.464 ************************************ 00:07:48.464 END TEST blockdev_nvme_gpt 00:07:48.464 ************************************ 00:07:48.464 17:38:11 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:48.464 00:07:48.464 real 1m1.688s 00:07:48.464 user 1m20.816s 00:07:48.464 sys 0m8.541s 00:07:48.464 17:38:11 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.464 17:38:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 17:38:11 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:48.464 17:38:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.464 17:38:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.464 17:38:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 ************************************ 00:07:48.464 START TEST nvme 00:07:48.464 ************************************ 00:07:48.464 17:38:11 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:48.464 * Looking for test storage... 00:07:48.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:48.464 17:38:11 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.464 17:38:11 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.464 17:38:11 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:48.464 17:38:11 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.464 17:38:11 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.464 17:38:11 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.464 17:38:11 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.464 17:38:11 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.464 17:38:11 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.464 17:38:11 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.464 17:38:11 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.464 17:38:11 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:48.464 17:38:11 nvme -- scripts/common.sh@345 -- # : 1 00:07:48.464 17:38:11 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.464 17:38:11 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.464 17:38:11 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:48.464 17:38:11 nvme -- scripts/common.sh@353 -- # local d=1 00:07:48.464 17:38:11 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.464 17:38:11 nvme -- scripts/common.sh@355 -- # echo 1 00:07:48.464 17:38:11 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.464 17:38:11 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@353 -- # local d=2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.464 17:38:11 nvme -- scripts/common.sh@355 -- # echo 2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.464 17:38:11 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.464 17:38:11 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.464 17:38:11 nvme -- scripts/common.sh@368 -- # return 0 00:07:48.464 17:38:11 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.464 17:38:11 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:48.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.464 --rc genhtml_branch_coverage=1 00:07:48.464 --rc genhtml_function_coverage=1 00:07:48.464 --rc genhtml_legend=1 00:07:48.464 --rc geninfo_all_blocks=1 00:07:48.465 --rc geninfo_unexecuted_blocks=1 00:07:48.465 00:07:48.465 ' 00:07:48.465 17:38:11 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:48.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.465 --rc genhtml_branch_coverage=1 00:07:48.465 --rc genhtml_function_coverage=1 00:07:48.465 --rc genhtml_legend=1 00:07:48.465 --rc geninfo_all_blocks=1 00:07:48.465 --rc geninfo_unexecuted_blocks=1 00:07:48.465 00:07:48.465 ' 00:07:48.465 17:38:11 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:48.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.465 --rc genhtml_branch_coverage=1 00:07:48.465 --rc genhtml_function_coverage=1 00:07:48.465 --rc genhtml_legend=1 00:07:48.465 --rc geninfo_all_blocks=1 00:07:48.465 --rc geninfo_unexecuted_blocks=1 00:07:48.465 00:07:48.465 ' 00:07:48.465 17:38:11 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:48.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.465 --rc genhtml_branch_coverage=1 00:07:48.465 --rc genhtml_function_coverage=1 00:07:48.465 --rc genhtml_legend=1 00:07:48.465 --rc geninfo_all_blocks=1 00:07:48.465 --rc geninfo_unexecuted_blocks=1 00:07:48.465 00:07:48.465 ' 00:07:48.465 17:38:11 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:49.032 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.601 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.601 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.601 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.601 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.601 17:38:13 nvme -- nvme/nvme.sh@79 -- # uname 00:07:49.601 17:38:13 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:49.601 17:38:13 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:49.601 17:38:13 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:49.601 17:38:13 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1075 -- # stubpid=62999 00:07:49.601 Waiting for stub to ready for secondary processes... 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62999 ]] 00:07:49.601 17:38:13 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:49.602 [2024-11-20 17:38:13.056846] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:07:49.602 [2024-11-20 17:38:13.057101] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:50.539 [2024-11-20 17:38:13.858790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.539 [2024-11-20 17:38:13.956350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.539 [2024-11-20 17:38:13.956565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.539 [2024-11-20 17:38:13.956592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.539 [2024-11-20 17:38:13.969735] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:50.539 [2024-11-20 17:38:13.969784] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.540 [2024-11-20 17:38:13.979593] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:50.540 [2024-11-20 17:38:13.979669] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:50.540 [2024-11-20 17:38:13.981410] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.540 [2024-11-20 17:38:13.981541] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:50.540 [2024-11-20 17:38:13.981583] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:50.540 [2024-11-20 17:38:13.983984] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.540 [2024-11-20 17:38:13.984290] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:50.540 [2024-11-20 17:38:13.984436] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:50.540 [2024-11-20 17:38:13.989638] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:50.540 [2024-11-20 17:38:13.990140] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:50.540 [2024-11-20 17:38:13.990330] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:50.540 [2024-11-20 17:38:13.990436] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:50.540 [2024-11-20 17:38:13.990541] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:50.540 17:38:14 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:50.540 done. 00:07:50.540 17:38:14 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:50.540 17:38:14 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:50.540 17:38:14 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:50.540 17:38:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.540 17:38:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.540 ************************************ 00:07:50.540 START TEST nvme_reset 00:07:50.540 ************************************ 00:07:50.540 17:38:14 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:50.799 Initializing NVMe Controllers 00:07:50.799 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:50.799 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:50.799 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:50.799 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:50.799 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:50.799 00:07:50.799 real 0m0.215s 00:07:50.799 user 0m0.073s 00:07:50.799 sys 0m0.090s 00:07:50.799 17:38:14 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.799 ************************************ 00:07:50.799 END TEST nvme_reset 00:07:50.799 ************************************ 00:07:50.799 17:38:14 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:50.799 17:38:14 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:50.799 17:38:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.799 17:38:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.799 17:38:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.799 ************************************ 00:07:50.799 START TEST nvme_identify 00:07:50.799 ************************************ 00:07:50.799 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:50.799 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:50.799 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:50.799 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:50.799 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:50.799 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:50.799 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:50.799 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:50.799 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:50.799 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:51.061 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:51.061 17:38:14 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:51.061 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:51.061 [2024-11-20 
17:38:14.584272] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63020 terminated unexpected 00:07:51.061 ===================================================== 00:07:51.061 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.061 ===================================================== 00:07:51.061 Controller Capabilities/Features 00:07:51.061 ================================ 00:07:51.061 Vendor ID: 1b36 00:07:51.061 Subsystem Vendor ID: 1af4 00:07:51.061 Serial Number: 12340 00:07:51.061 Model Number: QEMU NVMe Ctrl 00:07:51.061 Firmware Version: 8.0.0 00:07:51.061 Recommended Arb Burst: 6 00:07:51.061 IEEE OUI Identifier: 00 54 52 00:07:51.061 Multi-path I/O 00:07:51.061 May have multiple subsystem ports: No 00:07:51.061 May have multiple controllers: No 00:07:51.061 Associated with SR-IOV VF: No 00:07:51.061 Max Data Transfer Size: 524288 00:07:51.061 Max Number of Namespaces: 256 00:07:51.061 Max Number of I/O Queues: 64 00:07:51.061 NVMe Specification Version (VS): 1.4 00:07:51.061 NVMe Specification Version (Identify): 1.4 00:07:51.061 Maximum Queue Entries: 2048 00:07:51.061 Contiguous Queues Required: Yes 00:07:51.061 Arbitration Mechanisms Supported 00:07:51.061 Weighted Round Robin: Not Supported 00:07:51.061 Vendor Specific: Not Supported 00:07:51.061 Reset Timeout: 7500 ms 00:07:51.061 Doorbell Stride: 4 bytes 00:07:51.061 NVM Subsystem Reset: Not Supported 00:07:51.061 Command Sets Supported 00:07:51.061 NVM Command Set: Supported 00:07:51.061 Boot Partition: Not Supported 00:07:51.061 Memory Page Size Minimum: 4096 bytes 00:07:51.061 Memory Page Size Maximum: 65536 bytes 00:07:51.061 Persistent Memory Region: Not Supported 00:07:51.061 Optional Asynchronous Events Supported 00:07:51.061 Namespace Attribute Notices: Supported 00:07:51.061 Firmware Activation Notices: Not Supported 00:07:51.061 ANA Change Notices: Not Supported 00:07:51.061 PLE Aggregate Log Change Notices: Not Supported 00:07:51.061 LBA Status Info Alert Notices: Not Supported 00:07:51.061 EGE Aggregate Log Change Notices: Not Supported 00:07:51.061 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.061 Zone Descriptor Change Notices: Not Supported 00:07:51.061 Discovery Log Change Notices: Not Supported 00:07:51.061 Controller Attributes 00:07:51.061 128-bit Host Identifier: Not Supported 00:07:51.061 Non-Operational Permissive Mode: Not Supported 00:07:51.061 NVM Sets: Not Supported 00:07:51.061 Read Recovery Levels: Not Supported 00:07:51.061 Endurance Groups: Not Supported 00:07:51.061 Predictable Latency Mode: Not Supported 00:07:51.061 Traffic Based Keep ALive: Not Supported 00:07:51.061 Namespace Granularity: Not Supported 00:07:51.061 SQ Associations: Not Supported 00:07:51.061 UUID List: Not Supported 00:07:51.061 Multi-Domain Subsystem: Not Supported 00:07:51.061 Fixed Capacity Management: Not Supported 00:07:51.061 Variable Capacity Management: Not Supported 00:07:51.062 Delete Endurance Group: Not Supported 00:07:51.062 Delete NVM Set: Not Supported 00:07:51.062 Extended LBA Formats Supported: Supported 00:07:51.062 Flexible Data Placement Supported: Not Supported 00:07:51.062 00:07:51.062 Controller Memory Buffer Support 00:07:51.062 ================================ 00:07:51.062 Supported: No 00:07:51.062 00:07:51.062 Persistent Memory Region Support 00:07:51.062 ================================ 00:07:51.062 Supported: No 00:07:51.062 00:07:51.062 Admin Command Set Attributes 00:07:51.062 ============================ 00:07:51.062 Security Send/Receive: 
Not Supported 00:07:51.062 Format NVM: Supported 00:07:51.062 Firmware Activate/Download: Not Supported 00:07:51.062 Namespace Management: Supported 00:07:51.062 Device Self-Test: Not Supported 00:07:51.062 Directives: Supported 00:07:51.062 NVMe-MI: Not Supported 00:07:51.062 Virtualization Management: Not Supported 00:07:51.062 Doorbell Buffer Config: Supported 00:07:51.062 Get LBA Status Capability: Not Supported 00:07:51.062 Command & Feature Lockdown Capability: Not Supported 00:07:51.062 Abort Command Limit: 4 00:07:51.062 Async Event Request Limit: 4 00:07:51.062 Number of Firmware Slots: N/A 00:07:51.062 Firmware Slot 1 Read-Only: N/A 00:07:51.062 Firmware Activation Without Reset: N/A 00:07:51.062 Multiple Update Detection Support: N/A 00:07:51.062 Firmware Update Granularity: No Information Provided 00:07:51.062 Per-Namespace SMART Log: Yes 00:07:51.062 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.062 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:51.062 Command Effects Log Page: Supported 00:07:51.062 Get Log Page Extended Data: Supported 00:07:51.062 Telemetry Log Pages: Not Supported 00:07:51.062 Persistent Event Log Pages: Not Supported 00:07:51.062 Supported Log Pages Log Page: May Support 00:07:51.062 Commands Supported & Effects Log Page: Not Supported 00:07:51.062 Feature Identifiers & Effects Log Page:May Support 00:07:51.062 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.062 Data Area 4 for Telemetry Log: Not Supported 00:07:51.062 Error Log Page Entries Supported: 1 00:07:51.062 Keep Alive: Not Supported 00:07:51.062 00:07:51.062 NVM Command Set Attributes 00:07:51.062 ========================== 00:07:51.062 Submission Queue Entry Size 00:07:51.062 Max: 64 00:07:51.062 Min: 64 00:07:51.062 Completion Queue Entry Size 00:07:51.062 Max: 16 00:07:51.062 Min: 16 00:07:51.062 Number of Namespaces: 256 00:07:51.062 Compare Command: Supported 00:07:51.062 Write Uncorrectable Command: Not Supported 00:07:51.062 Dataset Management Command: Supported 00:07:51.062 Write Zeroes Command: Supported 00:07:51.062 Set Features Save Field: Supported 00:07:51.062 Reservations: Not Supported 00:07:51.062 Timestamp: Supported 00:07:51.062 Copy: Supported 00:07:51.062 Volatile Write Cache: Present 00:07:51.062 Atomic Write Unit (Normal): 1 00:07:51.062 Atomic Write Unit (PFail): 1 00:07:51.062 Atomic Compare & Write Unit: 1 00:07:51.062 Fused Compare & Write: Not Supported 00:07:51.062 Scatter-Gather List 00:07:51.062 SGL Command Set: Supported 00:07:51.062 SGL Keyed: Not Supported 00:07:51.062 SGL Bit Bucket Descriptor: Not Supported 00:07:51.062 SGL Metadata Pointer: Not Supported 00:07:51.062 Oversized SGL: Not Supported 00:07:51.062 SGL Metadata Address: Not Supported 00:07:51.062 SGL Offset: Not Supported 00:07:51.062 Transport SGL Data Block: Not Supported 00:07:51.062 Replay Protected Memory Block: Not Supported 00:07:51.062 00:07:51.062 Firmware Slot Information 00:07:51.062 ========================= 00:07:51.062 Active slot: 1 00:07:51.062 Slot 1 Firmware Revision: 1.0 00:07:51.062 00:07:51.062 00:07:51.062 Commands Supported and Effects 00:07:51.062 ============================== 00:07:51.062 Admin Commands 00:07:51.062 -------------- 00:07:51.062 Delete I/O Submission Queue (00h): Supported 00:07:51.062 Create I/O Submission Queue (01h): Supported 00:07:51.062 Get Log Page (02h): Supported 00:07:51.062 Delete I/O Completion Queue (04h): Supported 00:07:51.062 Create I/O Completion Queue (05h): Supported 00:07:51.062 Identify (06h): Supported 
00:07:51.062 Abort (08h): Supported 00:07:51.062 Set Features (09h): Supported 00:07:51.062 Get Features (0Ah): Supported 00:07:51.062 Asynchronous Event Request (0Ch): Supported 00:07:51.062 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.062 Directive Send (19h): Supported 00:07:51.062 Directive Receive (1Ah): Supported 00:07:51.062 Virtualization Management (1Ch): Supported 00:07:51.062 Doorbell Buffer Config (7Ch): Supported 00:07:51.062 Format NVM (80h): Supported LBA-Change 00:07:51.062 I/O Commands 00:07:51.062 ------------ 00:07:51.062 Flush (00h): Supported LBA-Change 00:07:51.062 Write (01h): Supported LBA-Change 00:07:51.062 Read (02h): Supported 00:07:51.062 Compare (05h): Supported 00:07:51.062 Write Zeroes (08h): Supported LBA-Change 00:07:51.062 Dataset Management (09h): Supported LBA-Change 00:07:51.062 Unknown (0Ch): Supported 00:07:51.062 Unknown (12h): Supported 00:07:51.062 Copy (19h): Supported LBA-Change 00:07:51.062 Unknown (1Dh): Supported LBA-Change 00:07:51.062 00:07:51.062 Error Log 00:07:51.062 ========= 00:07:51.062 00:07:51.062 Arbitration 00:07:51.062 =========== 00:07:51.062 Arbitration Burst: no limit 00:07:51.062 00:07:51.062 Power Management 00:07:51.062 ================ 00:07:51.062 Number of Power States: 1 00:07:51.062 Current Power State: Power State #0 00:07:51.062 Power State #0: 00:07:51.062 Max Power: 25.00 W 00:07:51.062 Non-Operational State: Operational 00:07:51.062 Entry Latency: 16 microseconds 00:07:51.062 Exit Latency: 4 microseconds 00:07:51.062 Relative Read Throughput: 0 00:07:51.062 Relative Read Latency: 0 00:07:51.062 Relative Write Throughput: 0 00:07:51.062 Relative Write Latency: 0 00:07:51.062 [2024-11-20 17:38:14.585900] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63020 terminated unexpected 00:07:51.062 Idle Power: Not Reported 00:07:51.062 Active Power: Not Reported 00:07:51.062 Non-Operational Permissive Mode: Not Supported 00:07:51.062 00:07:51.062 Health Information 00:07:51.062 ================== 00:07:51.062 Critical Warnings: 00:07:51.062 Available Spare Space: OK 00:07:51.062 Temperature: OK 00:07:51.062 Device Reliability: OK 00:07:51.062 Read Only: No 00:07:51.062 Volatile Memory Backup: OK 00:07:51.062 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.062 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.063 Available Spare: 0% 00:07:51.063 Available Spare Threshold: 0% 00:07:51.063 Life Percentage Used: 0% 00:07:51.063 Data Units Read: 620 00:07:51.063 Data Units Written: 548 00:07:51.063 Host Read Commands: 33361 00:07:51.063 Host Write Commands: 33147 00:07:51.063 Controller Busy Time: 0 minutes 00:07:51.063 Power Cycles: 0 00:07:51.063 Power On Hours: 0 hours 00:07:51.063 Unsafe Shutdowns: 0 00:07:51.063 Unrecoverable Media Errors: 0 00:07:51.063 Lifetime Error Log Entries: 0 00:07:51.063 Warning Temperature Time: 0 minutes 00:07:51.063 Critical Temperature Time: 0 minutes 00:07:51.063 00:07:51.063 Number of Queues 00:07:51.063 ================ 00:07:51.063 Number of I/O Submission Queues: 64 00:07:51.063 Number of I/O Completion Queues: 64 00:07:51.063 00:07:51.063 ZNS Specific Controller Data 00:07:51.063 ============================ 00:07:51.063 Zone Append Size Limit: 0 00:07:51.063 00:07:51.063 00:07:51.063 Active Namespaces 00:07:51.063 ================= 00:07:51.063 Namespace ID:1 00:07:51.063 Error Recovery Timeout: Unlimited 00:07:51.063 Command Set Identifier: NVM (00h) 00:07:51.063 Deallocate: Supported 00:07:51.063
Deallocated/Unwritten Error: Supported 00:07:51.063 Deallocated Read Value: All 0x00 00:07:51.063 Deallocate in Write Zeroes: Not Supported 00:07:51.063 Deallocated Guard Field: 0xFFFF 00:07:51.063 Flush: Supported 00:07:51.063 Reservation: Not Supported 00:07:51.063 Metadata Transferred as: Separate Metadata Buffer 00:07:51.063 Namespace Sharing Capabilities: Private 00:07:51.063 Size (in LBAs): 1548666 (5GiB) 00:07:51.063 Capacity (in LBAs): 1548666 (5GiB) 00:07:51.063 Utilization (in LBAs): 1548666 (5GiB) 00:07:51.063 Thin Provisioning: Not Supported 00:07:51.063 Per-NS Atomic Units: No 00:07:51.063 Maximum Single Source Range Length: 128 00:07:51.063 Maximum Copy Length: 128 00:07:51.063 Maximum Source Range Count: 128 00:07:51.063 NGUID/EUI64 Never Reused: No 00:07:51.063 Namespace Write Protected: No 00:07:51.063 Number of LBA Formats: 8 00:07:51.063 Current LBA Format: LBA Format #07 00:07:51.063 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.063 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.063 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.063 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.063 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.063 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.063 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.063 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.063 00:07:51.063 NVM Specific Namespace Data 00:07:51.063 =========================== 00:07:51.063 Logical Block Storage Tag Mask: 0 00:07:51.063 Protection Information Capabilities: 00:07:51.063 16b Guard Protection Information Storage Tag Support: No 00:07:51.063 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.063 Storage Tag Check Read Support: No 00:07:51.063 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.063 ===================================================== 00:07:51.063 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.063 ===================================================== 00:07:51.063 Controller Capabilities/Features 00:07:51.063 ================================ 00:07:51.063 Vendor ID: 1b36 00:07:51.063 Subsystem Vendor ID: 1af4 00:07:51.063 Serial Number: 12341 00:07:51.063 Model Number: QEMU NVMe Ctrl 00:07:51.063 Firmware Version: 8.0.0 00:07:51.063 Recommended Arb Burst: 6 00:07:51.063 IEEE OUI Identifier: 00 54 52 00:07:51.063 Multi-path I/O 00:07:51.063 May have multiple subsystem ports: No 00:07:51.063 May have multiple controllers: No 00:07:51.063 Associated with SR-IOV VF: No 00:07:51.063 Max Data Transfer Size: 524288 00:07:51.063 Max Number of Namespaces: 256 00:07:51.063 Max Number of I/O Queues: 64 00:07:51.063 NVMe Specification Version (VS): 1.4 00:07:51.063 NVMe 
Specification Version (Identify): 1.4 00:07:51.063 Maximum Queue Entries: 2048 00:07:51.063 Contiguous Queues Required: Yes 00:07:51.063 Arbitration Mechanisms Supported 00:07:51.063 Weighted Round Robin: Not Supported 00:07:51.063 Vendor Specific: Not Supported 00:07:51.063 Reset Timeout: 7500 ms 00:07:51.063 Doorbell Stride: 4 bytes 00:07:51.063 NVM Subsystem Reset: Not Supported 00:07:51.063 Command Sets Supported 00:07:51.063 NVM Command Set: Supported 00:07:51.063 Boot Partition: Not Supported 00:07:51.063 Memory Page Size Minimum: 4096 bytes 00:07:51.063 Memory Page Size Maximum: 65536 bytes 00:07:51.063 Persistent Memory Region: Not Supported 00:07:51.063 Optional Asynchronous Events Supported 00:07:51.063 Namespace Attribute Notices: Supported 00:07:51.063 Firmware Activation Notices: Not Supported 00:07:51.063 ANA Change Notices: Not Supported 00:07:51.063 PLE Aggregate Log Change Notices: Not Supported 00:07:51.063 LBA Status Info Alert Notices: Not Supported 00:07:51.063 EGE Aggregate Log Change Notices: Not Supported 00:07:51.064 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.064 Zone Descriptor Change Notices: Not Supported 00:07:51.064 Discovery Log Change Notices: Not Supported 00:07:51.064 Controller Attributes 00:07:51.064 128-bit Host Identifier: Not Supported 00:07:51.064 Non-Operational Permissive Mode: Not Supported 00:07:51.064 NVM Sets: Not Supported 00:07:51.064 Read Recovery Levels: Not Supported 00:07:51.064 Endurance Groups: Not Supported 00:07:51.064 Predictable Latency Mode: Not Supported 00:07:51.064 Traffic Based Keep ALive: Not Supported 00:07:51.064 Namespace Granularity: Not Supported 00:07:51.064 SQ Associations: Not Supported 00:07:51.064 UUID List: Not Supported 00:07:51.064 Multi-Domain Subsystem: Not Supported 00:07:51.064 Fixed Capacity Management: Not Supported 00:07:51.064 Variable Capacity Management: Not Supported 00:07:51.064 Delete Endurance Group: Not Supported 00:07:51.064 Delete NVM Set: Not Supported 00:07:51.064 Extended LBA Formats Supported: Supported 00:07:51.064 Flexible Data Placement Supported: Not Supported 00:07:51.064 00:07:51.064 Controller Memory Buffer Support 00:07:51.064 ================================ 00:07:51.064 Supported: No 00:07:51.064 00:07:51.064 Persistent Memory Region Support 00:07:51.064 ================================ 00:07:51.064 Supported: No 00:07:51.064 00:07:51.064 Admin Command Set Attributes 00:07:51.064 ============================ 00:07:51.064 Security Send/Receive: Not Supported 00:07:51.064 Format NVM: Supported 00:07:51.064 Firmware Activate/Download: Not Supported 00:07:51.064 Namespace Management: Supported 00:07:51.064 Device Self-Test: Not Supported 00:07:51.064 Directives: Supported 00:07:51.064 NVMe-MI: Not Supported 00:07:51.064 Virtualization Management: Not Supported 00:07:51.064 Doorbell Buffer Config: Supported 00:07:51.064 Get LBA Status Capability: Not Supported 00:07:51.064 Command & Feature Lockdown Capability: Not Supported 00:07:51.064 Abort Command Limit: 4 00:07:51.064 Async Event Request Limit: 4 00:07:51.064 Number of Firmware Slots: N/A 00:07:51.064 Firmware Slot 1 Read-Only: N/A 00:07:51.064 Firmware Activation Without Reset: N/A 00:07:51.064 Multiple Update Detection Support: N/A 00:07:51.064 Firmware Update Granularity: No Information Provided 00:07:51.064 Per-Namespace SMART Log: Yes 00:07:51.064 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.064 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:51.064 Command Effects Log Page: Supported 
00:07:51.064 Get Log Page Extended Data: Supported 00:07:51.064 Telemetry Log Pages: Not Supported 00:07:51.064 Persistent Event Log Pages: Not Supported 00:07:51.064 Supported Log Pages Log Page: May Support 00:07:51.064 Commands Supported & Effects Log Page: Not Supported 00:07:51.064 Feature Identifiers & Effects Log Page:May Support 00:07:51.064 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.064 Data Area 4 for Telemetry Log: Not Supported 00:07:51.064 Error Log Page Entries Supported: 1 00:07:51.064 Keep Alive: Not Supported 00:07:51.064 00:07:51.064 NVM Command Set Attributes 00:07:51.064 ========================== 00:07:51.064 Submission Queue Entry Size 00:07:51.064 Max: 64 00:07:51.064 Min: 64 00:07:51.064 Completion Queue Entry Size 00:07:51.064 Max: 16 00:07:51.064 Min: 16 00:07:51.064 Number of Namespaces: 256 00:07:51.064 Compare Command: Supported 00:07:51.064 Write Uncorrectable Command: Not Supported 00:07:51.064 Dataset Management Command: Supported 00:07:51.064 Write Zeroes Command: Supported 00:07:51.064 Set Features Save Field: Supported 00:07:51.064 Reservations: Not Supported 00:07:51.064 Timestamp: Supported 00:07:51.064 Copy: Supported 00:07:51.064 Volatile Write Cache: Present 00:07:51.064 Atomic Write Unit (Normal): 1 00:07:51.064 Atomic Write Unit (PFail): 1 00:07:51.064 Atomic Compare & Write Unit: 1 00:07:51.064 Fused Compare & Write: Not Supported 00:07:51.064 Scatter-Gather List 00:07:51.064 SGL Command Set: Supported 00:07:51.064 SGL Keyed: Not Supported 00:07:51.064 SGL Bit Bucket Descriptor: Not Supported 00:07:51.064 SGL Metadata Pointer: Not Supported 00:07:51.064 Oversized SGL: Not Supported 00:07:51.064 SGL Metadata Address: Not Supported 00:07:51.064 SGL Offset: Not Supported 00:07:51.064 Transport SGL Data Block: Not Supported 00:07:51.064 Replay Protected Memory Block: Not Supported 00:07:51.064 00:07:51.064 Firmware Slot Information 00:07:51.064 ========================= 00:07:51.064 Active slot: 1 00:07:51.064 Slot 1 Firmware Revision: 1.0 00:07:51.064 00:07:51.064 00:07:51.064 Commands Supported and Effects 00:07:51.064 ============================== 00:07:51.064 Admin Commands 00:07:51.064 -------------- 00:07:51.064 Delete I/O Submission Queue (00h): Supported 00:07:51.064 Create I/O Submission Queue (01h): Supported 00:07:51.064 Get Log Page (02h): Supported 00:07:51.064 Delete I/O Completion Queue (04h): Supported 00:07:51.064 Create I/O Completion Queue (05h): Supported 00:07:51.064 Identify (06h): Supported 00:07:51.064 Abort (08h): Supported 00:07:51.064 Set Features (09h): Supported 00:07:51.064 Get Features (0Ah): Supported 00:07:51.064 Asynchronous Event Request (0Ch): Supported 00:07:51.064 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.064 Directive Send (19h): Supported 00:07:51.064 Directive Receive (1Ah): Supported 00:07:51.064 Virtualization Management (1Ch): Supported 00:07:51.064 Doorbell Buffer Config (7Ch): Supported 00:07:51.064 Format NVM (80h): Supported LBA-Change 00:07:51.064 I/O Commands 00:07:51.064 ------------ 00:07:51.064 Flush (00h): Supported LBA-Change 00:07:51.064 Write (01h): Supported LBA-Change 00:07:51.064 Read (02h): Supported 00:07:51.064 Compare (05h): Supported 00:07:51.064 Write Zeroes (08h): Supported LBA-Change 00:07:51.064 Dataset Management (09h): Supported LBA-Change 00:07:51.064 Unknown (0Ch): Supported 00:07:51.064 Unknown (12h): Supported 00:07:51.064 Copy (19h): Supported LBA-Change 00:07:51.064 Unknown (1Dh): Supported LBA-Change 00:07:51.064 00:07:51.064 Error 
Log 00:07:51.064 ========= 00:07:51.064 00:07:51.064 Arbitration 00:07:51.064 =========== 00:07:51.064 Arbitration Burst: no limit 00:07:51.064 00:07:51.065 Power Management 00:07:51.065 ================ 00:07:51.065 Number of Power States: 1 00:07:51.065 Current Power State: Power State #0 00:07:51.065 Power State #0: 00:07:51.065 Max Power: 25.00 W 00:07:51.065 Non-Operational State: Operational 00:07:51.065 Entry Latency: 16 microseconds 00:07:51.065 Exit Latency: 4 microseconds 00:07:51.065 Relative Read Throughput: 0 00:07:51.065 Relative Read Latency: 0 00:07:51.065 Relative Write Throughput: 0 00:07:51.065 Relative Write Latency: 0 00:07:51.065 Idle Power: Not Reported 00:07:51.065 Active Power: Not Reported 00:07:51.065 Non-Operational Permissive Mode: Not Supported 00:07:51.065 00:07:51.065 Health Information 00:07:51.065 ================== 00:07:51.065 Critical Warnings: 00:07:51.065 Available Spare Space: OK 00:07:51.065 [2024-11-20 17:38:14.586637] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63020 terminated unexpected 00:07:51.065 Temperature: OK 00:07:51.065 Device Reliability: OK 00:07:51.065 Read Only: No 00:07:51.065 Volatile Memory Backup: OK 00:07:51.065 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.065 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.065 Available Spare: 0% 00:07:51.065 Available Spare Threshold: 0% 00:07:51.065 Life Percentage Used: 0% 00:07:51.065 Data Units Read: 942 00:07:51.065 Data Units Written: 815 00:07:51.065 Host Read Commands: 48945 00:07:51.065 Host Write Commands: 47837 00:07:51.065 Controller Busy Time: 0 minutes 00:07:51.065 Power Cycles: 0 00:07:51.065 Power On Hours: 0 hours 00:07:51.065 Unsafe Shutdowns: 0 00:07:51.065 Unrecoverable Media Errors: 0 00:07:51.065 Lifetime Error Log Entries: 0 00:07:51.065 Warning Temperature Time: 0 minutes 00:07:51.065 Critical Temperature Time: 0 minutes 00:07:51.065 00:07:51.065 Number of Queues 00:07:51.065 ================ 00:07:51.065 Number of I/O Submission Queues: 64 00:07:51.065 Number of I/O Completion Queues: 64 00:07:51.065 00:07:51.065 ZNS Specific Controller Data 00:07:51.065 ============================ 00:07:51.065 Zone Append Size Limit: 0 00:07:51.065 00:07:51.065 00:07:51.065 Active Namespaces 00:07:51.065 ================= 00:07:51.065 Namespace ID:1 00:07:51.065 Error Recovery Timeout: Unlimited 00:07:51.065 Command Set Identifier: NVM (00h) 00:07:51.065 Deallocate: Supported 00:07:51.065 Deallocated/Unwritten Error: Supported 00:07:51.065 Deallocated Read Value: All 0x00 00:07:51.065 Deallocate in Write Zeroes: Not Supported 00:07:51.065 Deallocated Guard Field: 0xFFFF 00:07:51.065 Flush: Supported 00:07:51.065 Reservation: Not Supported 00:07:51.065 Namespace Sharing Capabilities: Private 00:07:51.065 Size (in LBAs): 1310720 (5GiB) 00:07:51.065 Capacity (in LBAs): 1310720 (5GiB) 00:07:51.065 Utilization (in LBAs): 1310720 (5GiB) 00:07:51.065 Thin Provisioning: Not Supported 00:07:51.065 Per-NS Atomic Units: No 00:07:51.065 Maximum Single Source Range Length: 128 00:07:51.065 Maximum Copy Length: 128 00:07:51.065 Maximum Source Range Count: 128 00:07:51.065 NGUID/EUI64 Never Reused: No 00:07:51.065 Namespace Write Protected: No 00:07:51.065 Number of LBA Formats: 8 00:07:51.065 Current LBA Format: LBA Format #04 00:07:51.065 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.065 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.065 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.065 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:51.065 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.065 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.065 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.065 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.065 00:07:51.065 NVM Specific Namespace Data 00:07:51.065 =========================== 00:07:51.065 Logical Block Storage Tag Mask: 0 00:07:51.065 Protection Information Capabilities: 00:07:51.065 16b Guard Protection Information Storage Tag Support: No 00:07:51.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.065 Storage Tag Check Read Support: No 00:07:51.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.065 ===================================================== 00:07:51.065 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.065 ===================================================== 00:07:51.065 Controller Capabilities/Features 00:07:51.065 ================================ 00:07:51.065 Vendor ID: 1b36 00:07:51.065 Subsystem Vendor ID: 1af4 00:07:51.065 Serial Number: 12343 00:07:51.065 Model Number: QEMU NVMe Ctrl 00:07:51.065 Firmware Version: 8.0.0 00:07:51.065 Recommended Arb Burst: 6 00:07:51.065 IEEE OUI Identifier: 00 54 52 00:07:51.065 Multi-path I/O 00:07:51.065 May have multiple subsystem ports: No 00:07:51.065 May have multiple controllers: Yes 00:07:51.065 Associated with SR-IOV VF: No 00:07:51.065 Max Data Transfer Size: 524288 00:07:51.065 Max Number of Namespaces: 256 00:07:51.065 Max Number of I/O Queues: 64 00:07:51.065 NVMe Specification Version (VS): 1.4 00:07:51.065 NVMe Specification Version (Identify): 1.4 00:07:51.065 Maximum Queue Entries: 2048 00:07:51.065 Contiguous Queues Required: Yes 00:07:51.065 Arbitration Mechanisms Supported 00:07:51.065 Weighted Round Robin: Not Supported 00:07:51.065 Vendor Specific: Not Supported 00:07:51.065 Reset Timeout: 7500 ms 00:07:51.065 Doorbell Stride: 4 bytes 00:07:51.066 NVM Subsystem Reset: Not Supported 00:07:51.066 Command Sets Supported 00:07:51.066 NVM Command Set: Supported 00:07:51.066 Boot Partition: Not Supported 00:07:51.066 Memory Page Size Minimum: 4096 bytes 00:07:51.066 Memory Page Size Maximum: 65536 bytes 00:07:51.066 Persistent Memory Region: Not Supported 00:07:51.066 Optional Asynchronous Events Supported 00:07:51.066 Namespace Attribute Notices: Supported 00:07:51.066 Firmware Activation Notices: Not Supported 00:07:51.066 ANA Change Notices: Not Supported 00:07:51.066 PLE Aggregate Log Change Notices: Not Supported 00:07:51.066 LBA Status Info Alert Notices: Not Supported 00:07:51.066 EGE Aggregate Log Change Notices: Not Supported 00:07:51.066 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.066 Zone 
Descriptor Change Notices: Not Supported 00:07:51.066 Discovery Log Change Notices: Not Supported 00:07:51.066 Controller Attributes 00:07:51.066 128-bit Host Identifier: Not Supported 00:07:51.066 Non-Operational Permissive Mode: Not Supported 00:07:51.066 NVM Sets: Not Supported 00:07:51.066 Read Recovery Levels: Not Supported 00:07:51.066 Endurance Groups: Supported 00:07:51.066 Predictable Latency Mode: Not Supported 00:07:51.066 Traffic Based Keep ALive: Not Supported 00:07:51.066 Namespace Granularity: Not Supported 00:07:51.066 SQ Associations: Not Supported 00:07:51.066 UUID List: Not Supported 00:07:51.066 Multi-Domain Subsystem: Not Supported 00:07:51.066 Fixed Capacity Management: Not Supported 00:07:51.066 Variable Capacity Management: Not Supported 00:07:51.066 Delete Endurance Group: Not Supported 00:07:51.066 Delete NVM Set: Not Supported 00:07:51.066 Extended LBA Formats Supported: Supported 00:07:51.066 Flexible Data Placement Supported: Supported 00:07:51.066 00:07:51.066 Controller Memory Buffer Support 00:07:51.066 ================================ 00:07:51.066 Supported: No 00:07:51.066 00:07:51.066 Persistent Memory Region Support 00:07:51.066 ================================ 00:07:51.066 Supported: No 00:07:51.066 00:07:51.066 Admin Command Set Attributes 00:07:51.066 ============================ 00:07:51.066 Security Send/Receive: Not Supported 00:07:51.066 Format NVM: Supported 00:07:51.066 Firmware Activate/Download: Not Supported 00:07:51.066 Namespace Management: Supported 00:07:51.066 Device Self-Test: Not Supported 00:07:51.066 Directives: Supported 00:07:51.066 NVMe-MI: Not Supported 00:07:51.066 Virtualization Management: Not Supported 00:07:51.066 Doorbell Buffer Config: Supported 00:07:51.066 Get LBA Status Capability: Not Supported 00:07:51.066 Command & Feature Lockdown Capability: Not Supported 00:07:51.066 Abort Command Limit: 4 00:07:51.066 Async Event Request Limit: 4 00:07:51.066 Number of Firmware Slots: N/A 00:07:51.066 Firmware Slot 1 Read-Only: N/A 00:07:51.066 Firmware Activation Without Reset: N/A 00:07:51.066 Multiple Update Detection Support: N/A 00:07:51.066 Firmware Update Granularity: No Information Provided 00:07:51.066 Per-Namespace SMART Log: Yes 00:07:51.066 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.066 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:51.066 Command Effects Log Page: Supported 00:07:51.066 Get Log Page Extended Data: Supported 00:07:51.066 Telemetry Log Pages: Not Supported 00:07:51.066 Persistent Event Log Pages: Not Supported 00:07:51.066 Supported Log Pages Log Page: May Support 00:07:51.066 Commands Supported & Effects Log Page: Not Supported 00:07:51.066 Feature Identifiers & Effects Log Page:May Support 00:07:51.066 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.066 Data Area 4 for Telemetry Log: Not Supported 00:07:51.066 Error Log Page Entries Supported: 1 00:07:51.066 Keep Alive: Not Supported 00:07:51.066 00:07:51.066 NVM Command Set Attributes 00:07:51.066 ========================== 00:07:51.066 Submission Queue Entry Size 00:07:51.066 Max: 64 00:07:51.066 Min: 64 00:07:51.066 Completion Queue Entry Size 00:07:51.066 Max: 16 00:07:51.066 Min: 16 00:07:51.066 Number of Namespaces: 256 00:07:51.066 Compare Command: Supported 00:07:51.066 Write Uncorrectable Command: Not Supported 00:07:51.066 Dataset Management Command: Supported 00:07:51.066 Write Zeroes Command: Supported 00:07:51.066 Set Features Save Field: Supported 00:07:51.066 Reservations: Not Supported 00:07:51.066 
Timestamp: Supported 00:07:51.066 Copy: Supported 00:07:51.066 Volatile Write Cache: Present 00:07:51.066 Atomic Write Unit (Normal): 1 00:07:51.066 Atomic Write Unit (PFail): 1 00:07:51.066 Atomic Compare & Write Unit: 1 00:07:51.066 Fused Compare & Write: Not Supported 00:07:51.066 Scatter-Gather List 00:07:51.066 SGL Command Set: Supported 00:07:51.066 SGL Keyed: Not Supported 00:07:51.066 SGL Bit Bucket Descriptor: Not Supported 00:07:51.066 SGL Metadata Pointer: Not Supported 00:07:51.066 Oversized SGL: Not Supported 00:07:51.066 SGL Metadata Address: Not Supported 00:07:51.066 SGL Offset: Not Supported 00:07:51.066 Transport SGL Data Block: Not Supported 00:07:51.066 Replay Protected Memory Block: Not Supported 00:07:51.066 00:07:51.066 Firmware Slot Information 00:07:51.066 ========================= 00:07:51.066 Active slot: 1 00:07:51.066 Slot 1 Firmware Revision: 1.0 00:07:51.066 00:07:51.066 00:07:51.066 Commands Supported and Effects 00:07:51.066 ============================== 00:07:51.066 Admin Commands 00:07:51.066 -------------- 00:07:51.066 Delete I/O Submission Queue (00h): Supported 00:07:51.066 Create I/O Submission Queue (01h): Supported 00:07:51.066 Get Log Page (02h): Supported 00:07:51.066 Delete I/O Completion Queue (04h): Supported 00:07:51.066 Create I/O Completion Queue (05h): Supported 00:07:51.066 Identify (06h): Supported 00:07:51.066 Abort (08h): Supported 00:07:51.066 Set Features (09h): Supported 00:07:51.066 Get Features (0Ah): Supported 00:07:51.066 Asynchronous Event Request (0Ch): Supported 00:07:51.066 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.066 Directive Send (19h): Supported 00:07:51.066 Directive Receive (1Ah): Supported 00:07:51.066 Virtualization Management (1Ch): Supported 00:07:51.066 Doorbell Buffer Config (7Ch): Supported 00:07:51.066 Format NVM (80h): Supported LBA-Change 00:07:51.066 I/O Commands 00:07:51.066 ------------ 00:07:51.066 Flush (00h): Supported LBA-Change 00:07:51.066 Write (01h): Supported LBA-Change 00:07:51.066 Read (02h): Supported 00:07:51.066 Compare (05h): Supported 00:07:51.066 Write Zeroes (08h): Supported LBA-Change 00:07:51.066 Dataset Management (09h): Supported LBA-Change 00:07:51.066 Unknown (0Ch): Supported 00:07:51.066 Unknown (12h): Supported 00:07:51.066 Copy (19h): Supported LBA-Change 00:07:51.066 Unknown (1Dh): Supported LBA-Change 00:07:51.066 00:07:51.066 Error Log 00:07:51.066 ========= 00:07:51.066 00:07:51.067 Arbitration 00:07:51.067 =========== 00:07:51.067 Arbitration Burst: no limit 00:07:51.067 00:07:51.067 Power Management 00:07:51.067 ================ 00:07:51.067 Number of Power States: 1 00:07:51.067 Current Power State: Power State #0 00:07:51.067 Power State #0: 00:07:51.067 Max Power: 25.00 W 00:07:51.067 Non-Operational State: Operational 00:07:51.067 Entry Latency: 16 microseconds 00:07:51.067 Exit Latency: 4 microseconds 00:07:51.067 Relative Read Throughput: 0 00:07:51.067 Relative Read Latency: 0 00:07:51.067 Relative Write Throughput: 0 00:07:51.067 Relative Write Latency: 0 00:07:51.067 Idle Power: Not Reported 00:07:51.067 Active Power: Not Reported 00:07:51.067 Non-Operational Permissive Mode: Not Supported 00:07:51.067 00:07:51.067 Health Information 00:07:51.067 ================== 00:07:51.067 Critical Warnings: 00:07:51.067 Available Spare Space: OK 00:07:51.067 Temperature: OK 00:07:51.067 Device Reliability: OK 00:07:51.067 Read Only: No 00:07:51.067 Volatile Memory Backup: OK 00:07:51.067 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.067 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.067 Available Spare: 0% 00:07:51.067 Available Spare Threshold: 0% 00:07:51.067 Life Percentage Used: 0% 00:07:51.067 Data Units Read: 798 00:07:51.067 Data Units Written: 727 00:07:51.067 Host Read Commands: 34933 00:07:51.067 Host Write Commands: 34356 00:07:51.067 Controller Busy Time: 0 minutes 00:07:51.067 Power Cycles: 0 00:07:51.067 Power On Hours: 0 hours 00:07:51.067 Unsafe Shutdowns: 0 00:07:51.067 Unrecoverable Media Errors: 0 00:07:51.067 Lifetime Error Log Entries: 0 00:07:51.067 Warning Temperature Time: 0 minutes 00:07:51.067 Critical Temperature Time: 0 minutes 00:07:51.067 00:07:51.067 Number of Queues 00:07:51.067 ================ 00:07:51.067 Number of I/O Submission Queues: 64 00:07:51.067 Number of I/O Completion Queues: 64 00:07:51.067 00:07:51.067 ZNS Specific Controller Data 00:07:51.067 ============================ 00:07:51.067 Zone Append Size Limit: 0 00:07:51.067 00:07:51.067 00:07:51.067 Active Namespaces 00:07:51.067 ================= 00:07:51.067 Namespace ID:1 00:07:51.067 Error Recovery Timeout: Unlimited 00:07:51.067 Command Set Identifier: NVM (00h) 00:07:51.067 Deallocate: Supported 00:07:51.067 Deallocated/Unwritten Error: Supported 00:07:51.067 Deallocated Read Value: All 0x00 00:07:51.067 Deallocate in Write Zeroes: Not Supported 00:07:51.067 Deallocated Guard Field: 0xFFFF 00:07:51.067 Flush: Supported 00:07:51.067 Reservation: Not Supported 00:07:51.067 Namespace Sharing Capabilities: Multiple Controllers 00:07:51.067 Size (in LBAs): 262144 (1GiB) 00:07:51.067 Capacity (in LBAs): 262144 (1GiB) 00:07:51.067 Utilization (in LBAs): 262144 (1GiB) 00:07:51.067 Thin Provisioning: Not Supported 00:07:51.067 Per-NS Atomic Units: No 00:07:51.067 Maximum Single Source Range Length: 128 00:07:51.067 Maximum Copy Length: 128 00:07:51.067 Maximum Source Range Count: 128 00:07:51.067 NGUID/EUI64 Never Reused: No 00:07:51.067 Namespace Write Protected: No 00:07:51.067 Endurance group ID: 1 00:07:51.067 Number of LBA Formats: 8 00:07:51.067 Current LBA Format: LBA Format #04 00:07:51.067 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.067 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.067 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.067 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.067 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.067 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.067 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.067 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.067 00:07:51.067 Get Feature FDP: 00:07:51.067 ================ 00:07:51.067 Enabled: Yes 00:07:51.067 FDP configuration index: 0 00:07:51.067 00:07:51.067 FDP configurations log page 00:07:51.067 =========================== 00:07:51.067 Number of FDP configurations: 1 00:07:51.067 Version: 0 00:07:51.067 Size: 112 00:07:51.067 FDP Configuration Descriptor: 0 00:07:51.067 Descriptor Size: 96 00:07:51.067 Reclaim Group Identifier format: 2 00:07:51.067 FDP Volatile Write Cache: Not Present 00:07:51.067 FDP Configuration: Valid 00:07:51.067 Vendor Specific Size: 0 00:07:51.067 Number of Reclaim Groups: 2 00:07:51.067 Number of Reclaim Unit Handles: 8 00:07:51.067 Max Placement Identifiers: 128 00:07:51.067 Number of Namespaces Supported: 256 00:07:51.067 Reclaim Unit Nominal Size: 6000000 bytes 00:07:51.067 Estimated Reclaim Unit Time Limit: Not Reported 00:07:51.067 RUH Desc #000: RUH Type: Initially Isolated 00:07:51.067 RUH Desc #001: RUH
Type: Initially Isolated 00:07:51.067 RUH Desc #002: RUH Type: Initially Isolated 00:07:51.067 RUH Desc #003: RUH Type: Initially Isolated 00:07:51.067 RUH Desc #004: RUH Type: Initially Isolated 00:07:51.067 RUH Desc #005: RUH Type: Initially Isolated 00:07:51.067 RUH Desc #006: RUH Type: Initially Isolated 00:07:51.067 RUH Desc #007: RUH Type: Initially Isolated 00:07:51.067 00:07:51.067 FDP reclaim unit handle usage log page 00:07:51.067 ====================================== 00:07:51.067 Number of Reclaim Unit Handles: 8 00:07:51.067 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:51.067 RUH Usage Desc #001: RUH Attributes: Unused 00:07:51.067 RUH Usage Desc #002: RUH Attributes: Unused 00:07:51.067 RUH Usage Desc #003: RUH Attributes: Unused 00:07:51.067 RUH Usage Desc #004: RUH Attributes: Unused 00:07:51.067 RUH Usage Desc #005: RUH Attributes: Unused 00:07:51.067 RUH Usage Desc #006: RUH Attributes: Unused 00:07:51.067 RUH Usage Desc #007: RUH Attributes: Unused 00:07:51.067 00:07:51.067 FDP statistics log page 00:07:51.067 ======================= 00:07:51.067 Host bytes with metadata written: 402235392 00:07:51.067 [2024-11-20 17:38:14.588114] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63020 terminated unexpected 00:07:51.067 Media bytes with metadata written: 402276352 00:07:51.067 Media bytes erased: 0 00:07:51.068 00:07:51.068 FDP events log page 00:07:51.068 =================== 00:07:51.068 Number of FDP events: 0 00:07:51.068 00:07:51.068 NVM Specific Namespace Data 00:07:51.068 =========================== 00:07:51.068 Logical Block Storage Tag Mask: 0 00:07:51.068 Protection Information Capabilities: 00:07:51.068 16b Guard Protection Information Storage Tag Support: No 00:07:51.068 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.068 Storage Tag Check Read Support: No 00:07:51.068 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.068 ===================================================== 00:07:51.068 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.068 ===================================================== 00:07:51.068 Controller Capabilities/Features 00:07:51.068 ================================ 00:07:51.068 Vendor ID: 1b36 00:07:51.068 Subsystem Vendor ID: 1af4 00:07:51.068 Serial Number: 12342 00:07:51.068 Model Number: QEMU NVMe Ctrl 00:07:51.068 Firmware Version: 8.0.0 00:07:51.068 Recommended Arb Burst: 6 00:07:51.068 IEEE OUI Identifier: 00 54 52 00:07:51.068 Multi-path I/O 00:07:51.068 May have multiple subsystem ports: No 00:07:51.068 May have multiple controllers: No 00:07:51.068 Associated with SR-IOV VF: No 00:07:51.068 Max Data Transfer Size: 524288 00:07:51.068 Max Number of Namespaces: 256 00:07:51.068
Max Number of I/O Queues: 64 00:07:51.068 NVMe Specification Version (VS): 1.4 00:07:51.068 NVMe Specification Version (Identify): 1.4 00:07:51.068 Maximum Queue Entries: 2048 00:07:51.068 Contiguous Queues Required: Yes 00:07:51.068 Arbitration Mechanisms Supported 00:07:51.068 Weighted Round Robin: Not Supported 00:07:51.068 Vendor Specific: Not Supported 00:07:51.068 Reset Timeout: 7500 ms 00:07:51.068 Doorbell Stride: 4 bytes 00:07:51.068 NVM Subsystem Reset: Not Supported 00:07:51.068 Command Sets Supported 00:07:51.068 NVM Command Set: Supported 00:07:51.068 Boot Partition: Not Supported 00:07:51.068 Memory Page Size Minimum: 4096 bytes 00:07:51.068 Memory Page Size Maximum: 65536 bytes 00:07:51.068 Persistent Memory Region: Not Supported 00:07:51.068 Optional Asynchronous Events Supported 00:07:51.068 Namespace Attribute Notices: Supported 00:07:51.068 Firmware Activation Notices: Not Supported 00:07:51.068 ANA Change Notices: Not Supported 00:07:51.068 PLE Aggregate Log Change Notices: Not Supported 00:07:51.068 LBA Status Info Alert Notices: Not Supported 00:07:51.068 EGE Aggregate Log Change Notices: Not Supported 00:07:51.068 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.068 Zone Descriptor Change Notices: Not Supported 00:07:51.068 Discovery Log Change Notices: Not Supported 00:07:51.068 Controller Attributes 00:07:51.068 128-bit Host Identifier: Not Supported 00:07:51.068 Non-Operational Permissive Mode: Not Supported 00:07:51.068 NVM Sets: Not Supported 00:07:51.068 Read Recovery Levels: Not Supported 00:07:51.068 Endurance Groups: Not Supported 00:07:51.068 Predictable Latency Mode: Not Supported 00:07:51.068 Traffic Based Keep ALive: Not Supported 00:07:51.068 Namespace Granularity: Not Supported 00:07:51.068 SQ Associations: Not Supported 00:07:51.068 UUID List: Not Supported 00:07:51.068 Multi-Domain Subsystem: Not Supported 00:07:51.068 Fixed Capacity Management: Not Supported 00:07:51.068 Variable Capacity Management: Not Supported 00:07:51.068 Delete Endurance Group: Not Supported 00:07:51.068 Delete NVM Set: Not Supported 00:07:51.068 Extended LBA Formats Supported: Supported 00:07:51.068 Flexible Data Placement Supported: Not Supported 00:07:51.068 00:07:51.068 Controller Memory Buffer Support 00:07:51.068 ================================ 00:07:51.068 Supported: No 00:07:51.068 00:07:51.068 Persistent Memory Region Support 00:07:51.068 ================================ 00:07:51.068 Supported: No 00:07:51.068 00:07:51.068 Admin Command Set Attributes 00:07:51.068 ============================ 00:07:51.068 Security Send/Receive: Not Supported 00:07:51.068 Format NVM: Supported 00:07:51.068 Firmware Activate/Download: Not Supported 00:07:51.068 Namespace Management: Supported 00:07:51.068 Device Self-Test: Not Supported 00:07:51.069 Directives: Supported 00:07:51.069 NVMe-MI: Not Supported 00:07:51.069 Virtualization Management: Not Supported 00:07:51.069 Doorbell Buffer Config: Supported 00:07:51.069 Get LBA Status Capability: Not Supported 00:07:51.069 Command & Feature Lockdown Capability: Not Supported 00:07:51.069 Abort Command Limit: 4 00:07:51.069 Async Event Request Limit: 4 00:07:51.069 Number of Firmware Slots: N/A 00:07:51.069 Firmware Slot 1 Read-Only: N/A 00:07:51.069 Firmware Activation Without Reset: N/A 00:07:51.069 Multiple Update Detection Support: N/A 00:07:51.069 Firmware Update Granularity: No Information Provided 00:07:51.069 Per-Namespace SMART Log: Yes 00:07:51.069 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.069 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:51.069 Command Effects Log Page: Supported 00:07:51.069 Get Log Page Extended Data: Supported 00:07:51.069 Telemetry Log Pages: Not Supported 00:07:51.069 Persistent Event Log Pages: Not Supported 00:07:51.069 Supported Log Pages Log Page: May Support 00:07:51.069 Commands Supported & Effects Log Page: Not Supported 00:07:51.069 Feature Identifiers & Effects Log Page:May Support 00:07:51.069 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.069 Data Area 4 for Telemetry Log: Not Supported 00:07:51.069 Error Log Page Entries Supported: 1 00:07:51.069 Keep Alive: Not Supported 00:07:51.069 00:07:51.069 NVM Command Set Attributes 00:07:51.069 ========================== 00:07:51.069 Submission Queue Entry Size 00:07:51.069 Max: 64 00:07:51.069 Min: 64 00:07:51.069 Completion Queue Entry Size 00:07:51.069 Max: 16 00:07:51.069 Min: 16 00:07:51.069 Number of Namespaces: 256 00:07:51.069 Compare Command: Supported 00:07:51.069 Write Uncorrectable Command: Not Supported 00:07:51.069 Dataset Management Command: Supported 00:07:51.069 Write Zeroes Command: Supported 00:07:51.069 Set Features Save Field: Supported 00:07:51.069 Reservations: Not Supported 00:07:51.069 Timestamp: Supported 00:07:51.069 Copy: Supported 00:07:51.069 Volatile Write Cache: Present 00:07:51.069 Atomic Write Unit (Normal): 1 00:07:51.069 Atomic Write Unit (PFail): 1 00:07:51.069 Atomic Compare & Write Unit: 1 00:07:51.069 Fused Compare & Write: Not Supported 00:07:51.069 Scatter-Gather List 00:07:51.069 SGL Command Set: Supported 00:07:51.069 SGL Keyed: Not Supported 00:07:51.069 SGL Bit Bucket Descriptor: Not Supported 00:07:51.069 SGL Metadata Pointer: Not Supported 00:07:51.069 Oversized SGL: Not Supported 00:07:51.069 SGL Metadata Address: Not Supported 00:07:51.069 SGL Offset: Not Supported 00:07:51.069 Transport SGL Data Block: Not Supported 00:07:51.069 Replay Protected Memory Block: Not Supported 00:07:51.069 00:07:51.069 Firmware Slot Information 00:07:51.069 ========================= 00:07:51.069 Active slot: 1 00:07:51.069 Slot 1 Firmware Revision: 1.0 00:07:51.069 00:07:51.069 00:07:51.069 Commands Supported and Effects 00:07:51.069 ============================== 00:07:51.069 Admin Commands 00:07:51.069 -------------- 00:07:51.069 Delete I/O Submission Queue (00h): Supported 00:07:51.069 Create I/O Submission Queue (01h): Supported 00:07:51.069 Get Log Page (02h): Supported 00:07:51.069 Delete I/O Completion Queue (04h): Supported 00:07:51.069 Create I/O Completion Queue (05h): Supported 00:07:51.069 Identify (06h): Supported 00:07:51.069 Abort (08h): Supported 00:07:51.069 Set Features (09h): Supported 00:07:51.069 Get Features (0Ah): Supported 00:07:51.069 Asynchronous Event Request (0Ch): Supported 00:07:51.069 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.069 Directive Send (19h): Supported 00:07:51.069 Directive Receive (1Ah): Supported 00:07:51.069 Virtualization Management (1Ch): Supported 00:07:51.069 Doorbell Buffer Config (7Ch): Supported 00:07:51.069 Format NVM (80h): Supported LBA-Change 00:07:51.069 I/O Commands 00:07:51.069 ------------ 00:07:51.069 Flush (00h): Supported LBA-Change 00:07:51.069 Write (01h): Supported LBA-Change 00:07:51.069 Read (02h): Supported 00:07:51.069 Compare (05h): Supported 00:07:51.069 Write Zeroes (08h): Supported LBA-Change 00:07:51.069 Dataset Management (09h): Supported LBA-Change 00:07:51.069 Unknown (0Ch): Supported 00:07:51.069 Unknown (12h): Supported 00:07:51.069 Copy (19h): Supported 
LBA-Change 00:07:51.069 Unknown (1Dh): Supported LBA-Change 00:07:51.069 00:07:51.069 Error Log 00:07:51.069 ========= 00:07:51.069 00:07:51.069 Arbitration 00:07:51.069 =========== 00:07:51.069 Arbitration Burst: no limit 00:07:51.069 00:07:51.069 Power Management 00:07:51.069 ================ 00:07:51.069 Number of Power States: 1 00:07:51.069 Current Power State: Power State #0 00:07:51.069 Power State #0: 00:07:51.069 Max Power: 25.00 W 00:07:51.069 Non-Operational State: Operational 00:07:51.069 Entry Latency: 16 microseconds 00:07:51.069 Exit Latency: 4 microseconds 00:07:51.069 Relative Read Throughput: 0 00:07:51.069 Relative Read Latency: 0 00:07:51.069 Relative Write Throughput: 0 00:07:51.069 Relative Write Latency: 0 00:07:51.069 Idle Power: Not Reported 00:07:51.069 Active Power: Not Reported 00:07:51.069 Non-Operational Permissive Mode: Not Supported 00:07:51.069 00:07:51.069 Health Information 00:07:51.069 ================== 00:07:51.069 Critical Warnings: 00:07:51.069 Available Spare Space: OK 00:07:51.069 Temperature: OK 00:07:51.069 Device Reliability: OK 00:07:51.069 Read Only: No 00:07:51.069 Volatile Memory Backup: OK 00:07:51.069 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.069 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.069 Available Spare: 0% 00:07:51.069 Available Spare Threshold: 0% 00:07:51.069 Life Percentage Used: 0% 00:07:51.069 Data Units Read: 1993 00:07:51.069 Data Units Written: 1780 00:07:51.069 Host Read Commands: 101487 00:07:51.069 Host Write Commands: 99756 00:07:51.069 Controller Busy Time: 0 minutes 00:07:51.069 Power Cycles: 0 00:07:51.069 Power On Hours: 0 hours 00:07:51.069 Unsafe Shutdowns: 0 00:07:51.069 Unrecoverable Media Errors: 0 00:07:51.069 Lifetime Error Log Entries: 0 00:07:51.069 Warning Temperature Time: 0 minutes 00:07:51.069 Critical Temperature Time: 0 minutes 00:07:51.069 00:07:51.069 Number of Queues 00:07:51.069 ================ 00:07:51.069 Number of I/O Submission Queues: 64 00:07:51.069 Number of I/O Completion Queues: 64 00:07:51.069 00:07:51.069 ZNS Specific Controller Data 00:07:51.069 ============================ 00:07:51.069 Zone Append Size Limit: 0 00:07:51.070 00:07:51.070 00:07:51.070 Active Namespaces 00:07:51.070 ================= 00:07:51.070 Namespace ID:1 00:07:51.070 Error Recovery Timeout: Unlimited 00:07:51.070 Command Set Identifier: NVM (00h) 00:07:51.070 Deallocate: Supported 00:07:51.070 Deallocated/Unwritten Error: Supported 00:07:51.070 Deallocated Read Value: All 0x00 00:07:51.070 Deallocate in Write Zeroes: Not Supported 00:07:51.070 Deallocated Guard Field: 0xFFFF 00:07:51.070 Flush: Supported 00:07:51.070 Reservation: Not Supported 00:07:51.070 Namespace Sharing Capabilities: Private 00:07:51.070 Size (in LBAs): 1048576 (4GiB) 00:07:51.070 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.070 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.070 Thin Provisioning: Not Supported 00:07:51.070 Per-NS Atomic Units: No 00:07:51.070 Maximum Single Source Range Length: 128 00:07:51.070 Maximum Copy Length: 128 00:07:51.070 Maximum Source Range Count: 128 00:07:51.070 NGUID/EUI64 Never Reused: No 00:07:51.070 Namespace Write Protected: No 00:07:51.070 Number of LBA Formats: 8 00:07:51.070 Current LBA Format: LBA Format #04 00:07:51.070 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.070 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.070 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.070 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.070 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:51.070 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.070 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.070 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.070 00:07:51.070 NVM Specific Namespace Data 00:07:51.070 =========================== 00:07:51.070 Logical Block Storage Tag Mask: 0 00:07:51.070 Protection Information Capabilities: 00:07:51.070 16b Guard Protection Information Storage Tag Support: No 00:07:51.070 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.070 Storage Tag Check Read Support: No 00:07:51.070 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Namespace ID:2 00:07:51.070 Error Recovery Timeout: Unlimited 00:07:51.070 Command Set Identifier: NVM (00h) 00:07:51.070 Deallocate: Supported 00:07:51.070 Deallocated/Unwritten Error: Supported 00:07:51.070 Deallocated Read Value: All 0x00 00:07:51.070 Deallocate in Write Zeroes: Not Supported 00:07:51.070 Deallocated Guard Field: 0xFFFF 00:07:51.070 Flush: Supported 00:07:51.070 Reservation: Not Supported 00:07:51.070 Namespace Sharing Capabilities: Private 00:07:51.070 Size (in LBAs): 1048576 (4GiB) 00:07:51.070 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.070 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.070 Thin Provisioning: Not Supported 00:07:51.070 Per-NS Atomic Units: No 00:07:51.070 Maximum Single Source Range Length: 128 00:07:51.070 Maximum Copy Length: 128 00:07:51.070 Maximum Source Range Count: 128 00:07:51.070 NGUID/EUI64 Never Reused: No 00:07:51.070 Namespace Write Protected: No 00:07:51.070 Number of LBA Formats: 8 00:07:51.070 Current LBA Format: LBA Format #04 00:07:51.070 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.070 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.070 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.070 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.070 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.070 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.070 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.070 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.070 00:07:51.070 NVM Specific Namespace Data 00:07:51.070 =========================== 00:07:51.070 Logical Block Storage Tag Mask: 0 00:07:51.070 Protection Information Capabilities: 00:07:51.070 16b Guard Protection Information Storage Tag Support: No 00:07:51.070 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.070 Storage Tag Check Read Support: No 00:07:51.070 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:51.070 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.070 Namespace ID:3 00:07:51.070 Error Recovery Timeout: Unlimited 00:07:51.070 Command Set Identifier: NVM (00h) 00:07:51.070 Deallocate: Supported 00:07:51.070 Deallocated/Unwritten Error: Supported 00:07:51.070 Deallocated Read Value: All 0x00 00:07:51.070 Deallocate in Write Zeroes: Not Supported 00:07:51.070 Deallocated Guard Field: 0xFFFF 00:07:51.070 Flush: Supported 00:07:51.070 Reservation: Not Supported 00:07:51.070 Namespace Sharing Capabilities: Private 00:07:51.070 Size (in LBAs): 1048576 (4GiB) 00:07:51.331 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.331 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.331 Thin Provisioning: Not Supported 00:07:51.331 Per-NS Atomic Units: No 00:07:51.331 Maximum Single Source Range Length: 128 00:07:51.331 Maximum Copy Length: 128 00:07:51.331 Maximum Source Range Count: 128 00:07:51.331 NGUID/EUI64 Never Reused: No 00:07:51.331 Namespace Write Protected: No 00:07:51.331 Number of LBA Formats: 8 00:07:51.331 Current LBA Format: LBA Format #04 00:07:51.331 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.331 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.331 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.331 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.331 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.331 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.331 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.331 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.331 00:07:51.331 NVM Specific Namespace Data 00:07:51.331 =========================== 00:07:51.331 Logical Block Storage Tag Mask: 0 00:07:51.331 Protection Information Capabilities: 00:07:51.331 16b Guard Protection Information Storage Tag Support: No 00:07:51.331 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.331 Storage Tag Check Read Support: No 00:07:51.331 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.331 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.331 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
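[Editor's note] The xtrace records above show nvme/nvme.sh@15-16 looping over the discovered PCIe BDFs and re-running the identify utility once per controller; the command for 0000:00:10.0 resumes immediately below. The following is a minimal sketch of what one such invocation does through SPDK's public NVMe driver API, under stated assumptions: the file name identify_one.c and the build line are illustrative, this is not the identify tool's actual source, and the opts_size field applies to recent SPDK releases.

/* identify_one.c: rough single-controller equivalent of one
 * spdk_nvme_identify invocation in this log. Build against an SPDK
 * tree (illustrative build line, adjust paths/libs to your install):
 *   gcc identify_one.c $(pkg-config --cflags --libs spdk_nvme spdk_env_dpdk)
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(int argc, char **argv)
{
	struct spdk_env_opts env_opts = {};
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;
	/* Same transport-ID syntax as the -r argument in the log. */
	const char *trid_str = argc > 1 ? argv[1]
					: "trtype:PCIe traddr:0000:00:10.0";

	env_opts.opts_size = sizeof(env_opts); /* expected by recent SPDK */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_one";
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
		fprintf(stderr, "bad transport id: %s\n", trid_str);
		return 1;
	}

	/* Synchronously attach to the one controller at that BDF. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "could not attach to %s\n", trid.traddr);
		return 1;
	}

	/* Identify Controller data: the source of the header fields
	 * (Vendor ID, Serial Number, Model Number, Firmware Version)
	 * seen in each dump above. The string fields are fixed-width
	 * and not NUL-terminated, hence the precision specifiers. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Vendor ID: %04x\n", cdata->vid);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number: %.40s\n", (const char *)cdata->mn);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Like the tool in the log, this must run as root with hugepages configured, passing the same 'trtype:PCIe traddr:...' string shown on the command line below.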
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:51.331 ===================================================== 00:07:51.331 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.331 ===================================================== 00:07:51.331 Controller Capabilities/Features 00:07:51.331 ================================ 00:07:51.331 Vendor ID: 1b36 00:07:51.331 Subsystem Vendor ID: 1af4 00:07:51.331 Serial Number: 12340 00:07:51.331 Model Number: QEMU NVMe Ctrl 00:07:51.331 Firmware Version: 8.0.0 00:07:51.331 Recommended Arb Burst: 6 00:07:51.331 IEEE OUI Identifier: 00 54 52 00:07:51.331 Multi-path I/O 00:07:51.331 May have multiple subsystem ports: No 00:07:51.331 May have multiple controllers: No 00:07:51.331 Associated with SR-IOV VF: No 00:07:51.331 Max Data Transfer Size: 524288 00:07:51.331 Max Number of Namespaces: 256 00:07:51.331 Max Number of I/O Queues: 64 00:07:51.331 NVMe Specification Version (VS): 1.4 00:07:51.331 NVMe Specification Version (Identify): 1.4 00:07:51.331 Maximum Queue Entries: 2048 00:07:51.331 Contiguous Queues Required: Yes 00:07:51.331 Arbitration Mechanisms Supported 00:07:51.331 Weighted Round Robin: Not Supported 00:07:51.331 Vendor Specific: Not Supported 00:07:51.331 Reset Timeout: 7500 ms 00:07:51.331 Doorbell Stride: 4 bytes 00:07:51.331 NVM Subsystem Reset: Not Supported 00:07:51.331 Command Sets Supported 00:07:51.331 NVM Command Set: Supported 00:07:51.331 Boot Partition: Not Supported 00:07:51.331 Memory Page Size Minimum: 4096 bytes 00:07:51.331 Memory Page Size Maximum: 65536 bytes 00:07:51.331 Persistent Memory Region: Not Supported 00:07:51.331 Optional Asynchronous Events Supported 00:07:51.331 Namespace Attribute Notices: Supported 00:07:51.331 Firmware Activation Notices: Not Supported 00:07:51.331 ANA Change Notices: Not Supported 00:07:51.331 PLE Aggregate Log Change Notices: Not Supported 00:07:51.331 LBA Status Info Alert Notices: Not Supported 00:07:51.331 EGE Aggregate Log Change Notices: Not Supported 00:07:51.331 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.331 Zone Descriptor Change Notices: Not Supported 00:07:51.331 Discovery Log Change Notices: Not Supported 00:07:51.331 Controller Attributes 00:07:51.331 128-bit Host Identifier: Not Supported 00:07:51.331 Non-Operational Permissive Mode: Not Supported 00:07:51.331 NVM Sets: Not Supported 00:07:51.331 Read Recovery Levels: Not Supported 00:07:51.331 Endurance Groups: Not Supported 00:07:51.331 Predictable Latency Mode: Not Supported 00:07:51.331 Traffic Based Keep ALive: Not Supported 00:07:51.331 Namespace Granularity: Not Supported 00:07:51.331 SQ Associations: Not Supported 00:07:51.331 UUID List: Not Supported 00:07:51.331 Multi-Domain Subsystem: Not Supported 00:07:51.331 Fixed Capacity Management: Not Supported 00:07:51.331 Variable Capacity Management: Not Supported 00:07:51.331 Delete Endurance Group: Not Supported 00:07:51.331 Delete NVM Set: Not Supported 00:07:51.331 Extended LBA Formats Supported: Supported 00:07:51.331 Flexible Data Placement Supported: Not Supported 00:07:51.331 00:07:51.331 Controller Memory Buffer Support 00:07:51.331 ================================ 00:07:51.331 Supported: No 00:07:51.331 00:07:51.331 Persistent Memory Region Support 00:07:51.331 ================================ 00:07:51.331 Supported: No 00:07:51.331 00:07:51.332 Admin Command Set Attributes 00:07:51.332 ============================ 00:07:51.332 Security Send/Receive: Not Supported 00:07:51.332 
Format NVM: Supported 00:07:51.332 Firmware Activate/Download: Not Supported 00:07:51.332 Namespace Management: Supported 00:07:51.332 Device Self-Test: Not Supported 00:07:51.332 Directives: Supported 00:07:51.332 NVMe-MI: Not Supported 00:07:51.332 Virtualization Management: Not Supported 00:07:51.332 Doorbell Buffer Config: Supported 00:07:51.332 Get LBA Status Capability: Not Supported 00:07:51.332 Command & Feature Lockdown Capability: Not Supported 00:07:51.332 Abort Command Limit: 4 00:07:51.332 Async Event Request Limit: 4 00:07:51.332 Number of Firmware Slots: N/A 00:07:51.332 Firmware Slot 1 Read-Only: N/A 00:07:51.332 Firmware Activation Without Reset: N/A 00:07:51.332 Multiple Update Detection Support: N/A 00:07:51.332 Firmware Update Granularity: No Information Provided 00:07:51.332 Per-Namespace SMART Log: Yes 00:07:51.332 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.332 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:51.332 Command Effects Log Page: Supported 00:07:51.332 Get Log Page Extended Data: Supported 00:07:51.332 Telemetry Log Pages: Not Supported 00:07:51.332 Persistent Event Log Pages: Not Supported 00:07:51.332 Supported Log Pages Log Page: May Support 00:07:51.332 Commands Supported & Effects Log Page: Not Supported 00:07:51.332 Feature Identifiers & Effects Log Page:May Support 00:07:51.332 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.332 Data Area 4 for Telemetry Log: Not Supported 00:07:51.332 Error Log Page Entries Supported: 1 00:07:51.332 Keep Alive: Not Supported 00:07:51.332 00:07:51.332 NVM Command Set Attributes 00:07:51.332 ========================== 00:07:51.332 Submission Queue Entry Size 00:07:51.332 Max: 64 00:07:51.332 Min: 64 00:07:51.332 Completion Queue Entry Size 00:07:51.332 Max: 16 00:07:51.332 Min: 16 00:07:51.332 Number of Namespaces: 256 00:07:51.332 Compare Command: Supported 00:07:51.332 Write Uncorrectable Command: Not Supported 00:07:51.332 Dataset Management Command: Supported 00:07:51.332 Write Zeroes Command: Supported 00:07:51.332 Set Features Save Field: Supported 00:07:51.332 Reservations: Not Supported 00:07:51.332 Timestamp: Supported 00:07:51.332 Copy: Supported 00:07:51.332 Volatile Write Cache: Present 00:07:51.332 Atomic Write Unit (Normal): 1 00:07:51.332 Atomic Write Unit (PFail): 1 00:07:51.332 Atomic Compare & Write Unit: 1 00:07:51.332 Fused Compare & Write: Not Supported 00:07:51.332 Scatter-Gather List 00:07:51.332 SGL Command Set: Supported 00:07:51.332 SGL Keyed: Not Supported 00:07:51.332 SGL Bit Bucket Descriptor: Not Supported 00:07:51.332 SGL Metadata Pointer: Not Supported 00:07:51.332 Oversized SGL: Not Supported 00:07:51.332 SGL Metadata Address: Not Supported 00:07:51.332 SGL Offset: Not Supported 00:07:51.332 Transport SGL Data Block: Not Supported 00:07:51.332 Replay Protected Memory Block: Not Supported 00:07:51.332 00:07:51.332 Firmware Slot Information 00:07:51.332 ========================= 00:07:51.332 Active slot: 1 00:07:51.332 Slot 1 Firmware Revision: 1.0 00:07:51.332 00:07:51.332 00:07:51.332 Commands Supported and Effects 00:07:51.332 ============================== 00:07:51.332 Admin Commands 00:07:51.332 -------------- 00:07:51.332 Delete I/O Submission Queue (00h): Supported 00:07:51.332 Create I/O Submission Queue (01h): Supported 00:07:51.332 Get Log Page (02h): Supported 00:07:51.332 Delete I/O Completion Queue (04h): Supported 00:07:51.332 Create I/O Completion Queue (05h): Supported 00:07:51.332 Identify (06h): Supported 00:07:51.332 Abort (08h): Supported 
00:07:51.332 Set Features (09h): Supported 00:07:51.332 Get Features (0Ah): Supported 00:07:51.332 Asynchronous Event Request (0Ch): Supported 00:07:51.332 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.332 Directive Send (19h): Supported 00:07:51.332 Directive Receive (1Ah): Supported 00:07:51.332 Virtualization Management (1Ch): Supported 00:07:51.332 Doorbell Buffer Config (7Ch): Supported 00:07:51.332 Format NVM (80h): Supported LBA-Change 00:07:51.332 I/O Commands 00:07:51.332 ------------ 00:07:51.332 Flush (00h): Supported LBA-Change 00:07:51.332 Write (01h): Supported LBA-Change 00:07:51.332 Read (02h): Supported 00:07:51.332 Compare (05h): Supported 00:07:51.332 Write Zeroes (08h): Supported LBA-Change 00:07:51.332 Dataset Management (09h): Supported LBA-Change 00:07:51.332 Unknown (0Ch): Supported 00:07:51.332 Unknown (12h): Supported 00:07:51.332 Copy (19h): Supported LBA-Change 00:07:51.332 Unknown (1Dh): Supported LBA-Change 00:07:51.332 00:07:51.332 Error Log 00:07:51.332 ========= 00:07:51.332 00:07:51.332 Arbitration 00:07:51.332 =========== 00:07:51.332 Arbitration Burst: no limit 00:07:51.332 00:07:51.332 Power Management 00:07:51.332 ================ 00:07:51.332 Number of Power States: 1 00:07:51.332 Current Power State: Power State #0 00:07:51.332 Power State #0: 00:07:51.332 Max Power: 25.00 W 00:07:51.332 Non-Operational State: Operational 00:07:51.332 Entry Latency: 16 microseconds 00:07:51.332 Exit Latency: 4 microseconds 00:07:51.332 Relative Read Throughput: 0 00:07:51.332 Relative Read Latency: 0 00:07:51.332 Relative Write Throughput: 0 00:07:51.332 Relative Write Latency: 0 00:07:51.332 Idle Power: Not Reported 00:07:51.332 Active Power: Not Reported 00:07:51.332 Non-Operational Permissive Mode: Not Supported 00:07:51.332 00:07:51.332 Health Information 00:07:51.332 ================== 00:07:51.332 Critical Warnings: 00:07:51.332 Available Spare Space: OK 00:07:51.332 Temperature: OK 00:07:51.332 Device Reliability: OK 00:07:51.332 Read Only: No 00:07:51.332 Volatile Memory Backup: OK 00:07:51.332 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.332 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.332 Available Spare: 0% 00:07:51.332 Available Spare Threshold: 0% 00:07:51.332 Life Percentage Used: 0% 00:07:51.332 Data Units Read: 620 00:07:51.332 Data Units Written: 548 00:07:51.332 Host Read Commands: 33361 00:07:51.332 Host Write Commands: 33147 00:07:51.332 Controller Busy Time: 0 minutes 00:07:51.332 Power Cycles: 0 00:07:51.332 Power On Hours: 0 hours 00:07:51.332 Unsafe Shutdowns: 0 00:07:51.332 Unrecoverable Media Errors: 0 00:07:51.332 Lifetime Error Log Entries: 0 00:07:51.332 Warning Temperature Time: 0 minutes 00:07:51.332 Critical Temperature Time: 0 minutes 00:07:51.332 00:07:51.332 Number of Queues 00:07:51.332 ================ 00:07:51.332 Number of I/O Submission Queues: 64 00:07:51.332 Number of I/O Completion Queues: 64 00:07:51.332 00:07:51.332 ZNS Specific Controller Data 00:07:51.332 ============================ 00:07:51.332 Zone Append Size Limit: 0 00:07:51.332 00:07:51.332 00:07:51.332 Active Namespaces 00:07:51.332 ================= 00:07:51.332 Namespace ID:1 00:07:51.332 Error Recovery Timeout: Unlimited 00:07:51.332 Command Set Identifier: NVM (00h) 00:07:51.332 Deallocate: Supported 00:07:51.332 Deallocated/Unwritten Error: Supported 00:07:51.332 Deallocated Read Value: All 0x00 00:07:51.332 Deallocate in Write Zeroes: Not Supported 00:07:51.332 Deallocated Guard Field: 0xFFFF 00:07:51.332 Flush: 
Supported 00:07:51.332 Reservation: Not Supported 00:07:51.332 Metadata Transferred as: Separate Metadata Buffer 00:07:51.332 Namespace Sharing Capabilities: Private 00:07:51.332 Size (in LBAs): 1548666 (5GiB) 00:07:51.332 Capacity (in LBAs): 1548666 (5GiB) 00:07:51.332 Utilization (in LBAs): 1548666 (5GiB) 00:07:51.332 Thin Provisioning: Not Supported 00:07:51.332 Per-NS Atomic Units: No 00:07:51.332 Maximum Single Source Range Length: 128 00:07:51.332 Maximum Copy Length: 128 00:07:51.332 Maximum Source Range Count: 128 00:07:51.332 NGUID/EUI64 Never Reused: No 00:07:51.332 Namespace Write Protected: No 00:07:51.332 Number of LBA Formats: 8 00:07:51.332 Current LBA Format: LBA Format #07 00:07:51.332 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.332 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.332 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.332 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.332 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.332 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.332 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.332 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.332 00:07:51.332 NVM Specific Namespace Data 00:07:51.332 =========================== 00:07:51.332 Logical Block Storage Tag Mask: 0 00:07:51.332 Protection Information Capabilities: 00:07:51.333 16b Guard Protection Information Storage Tag Support: No 00:07:51.333 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.333 Storage Tag Check Read Support: No 00:07:51.333 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.333 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.333 17:38:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:51.594 ===================================================== 00:07:51.594 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.594 ===================================================== 00:07:51.594 Controller Capabilities/Features 00:07:51.594 ================================ 00:07:51.594 Vendor ID: 1b36 00:07:51.594 Subsystem Vendor ID: 1af4 00:07:51.594 Serial Number: 12341 00:07:51.594 Model Number: QEMU NVMe Ctrl 00:07:51.594 Firmware Version: 8.0.0 00:07:51.594 Recommended Arb Burst: 6 00:07:51.594 IEEE OUI Identifier: 00 54 52 00:07:51.594 Multi-path I/O 00:07:51.594 May have multiple subsystem ports: No 00:07:51.594 May have multiple controllers: No 00:07:51.594 Associated with SR-IOV VF: No 00:07:51.594 Max Data Transfer Size: 524288 00:07:51.594 Max Number of Namespaces: 256 00:07:51.594 Max Number of I/O Queues: 64 00:07:51.594 NVMe 
Specification Version (VS): 1.4 00:07:51.594 NVMe Specification Version (Identify): 1.4 00:07:51.594 Maximum Queue Entries: 2048 00:07:51.594 Contiguous Queues Required: Yes 00:07:51.594 Arbitration Mechanisms Supported 00:07:51.594 Weighted Round Robin: Not Supported 00:07:51.594 Vendor Specific: Not Supported 00:07:51.594 Reset Timeout: 7500 ms 00:07:51.594 Doorbell Stride: 4 bytes 00:07:51.594 NVM Subsystem Reset: Not Supported 00:07:51.594 Command Sets Supported 00:07:51.594 NVM Command Set: Supported 00:07:51.594 Boot Partition: Not Supported 00:07:51.594 Memory Page Size Minimum: 4096 bytes 00:07:51.594 Memory Page Size Maximum: 65536 bytes 00:07:51.595 Persistent Memory Region: Not Supported 00:07:51.595 Optional Asynchronous Events Supported 00:07:51.595 Namespace Attribute Notices: Supported 00:07:51.595 Firmware Activation Notices: Not Supported 00:07:51.595 ANA Change Notices: Not Supported 00:07:51.595 PLE Aggregate Log Change Notices: Not Supported 00:07:51.595 LBA Status Info Alert Notices: Not Supported 00:07:51.595 EGE Aggregate Log Change Notices: Not Supported 00:07:51.595 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.595 Zone Descriptor Change Notices: Not Supported 00:07:51.595 Discovery Log Change Notices: Not Supported 00:07:51.595 Controller Attributes 00:07:51.595 128-bit Host Identifier: Not Supported 00:07:51.595 Non-Operational Permissive Mode: Not Supported 00:07:51.595 NVM Sets: Not Supported 00:07:51.595 Read Recovery Levels: Not Supported 00:07:51.595 Endurance Groups: Not Supported 00:07:51.595 Predictable Latency Mode: Not Supported 00:07:51.595 Traffic Based Keep ALive: Not Supported 00:07:51.595 Namespace Granularity: Not Supported 00:07:51.595 SQ Associations: Not Supported 00:07:51.595 UUID List: Not Supported 00:07:51.595 Multi-Domain Subsystem: Not Supported 00:07:51.595 Fixed Capacity Management: Not Supported 00:07:51.595 Variable Capacity Management: Not Supported 00:07:51.595 Delete Endurance Group: Not Supported 00:07:51.595 Delete NVM Set: Not Supported 00:07:51.595 Extended LBA Formats Supported: Supported 00:07:51.595 Flexible Data Placement Supported: Not Supported 00:07:51.595 00:07:51.595 Controller Memory Buffer Support 00:07:51.595 ================================ 00:07:51.595 Supported: No 00:07:51.595 00:07:51.595 Persistent Memory Region Support 00:07:51.595 ================================ 00:07:51.595 Supported: No 00:07:51.595 00:07:51.595 Admin Command Set Attributes 00:07:51.595 ============================ 00:07:51.595 Security Send/Receive: Not Supported 00:07:51.595 Format NVM: Supported 00:07:51.595 Firmware Activate/Download: Not Supported 00:07:51.595 Namespace Management: Supported 00:07:51.595 Device Self-Test: Not Supported 00:07:51.595 Directives: Supported 00:07:51.595 NVMe-MI: Not Supported 00:07:51.595 Virtualization Management: Not Supported 00:07:51.595 Doorbell Buffer Config: Supported 00:07:51.595 Get LBA Status Capability: Not Supported 00:07:51.595 Command & Feature Lockdown Capability: Not Supported 00:07:51.595 Abort Command Limit: 4 00:07:51.595 Async Event Request Limit: 4 00:07:51.595 Number of Firmware Slots: N/A 00:07:51.595 Firmware Slot 1 Read-Only: N/A 00:07:51.595 Firmware Activation Without Reset: N/A 00:07:51.595 Multiple Update Detection Support: N/A 00:07:51.595 Firmware Update Granularity: No Information Provided 00:07:51.595 Per-Namespace SMART Log: Yes 00:07:51.595 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.595 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:51.595 Command Effects Log Page: Supported 00:07:51.595 Get Log Page Extended Data: Supported 00:07:51.595 Telemetry Log Pages: Not Supported 00:07:51.595 Persistent Event Log Pages: Not Supported 00:07:51.595 Supported Log Pages Log Page: May Support 00:07:51.595 Commands Supported & Effects Log Page: Not Supported 00:07:51.595 Feature Identifiers & Effects Log Page:May Support 00:07:51.595 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.595 Data Area 4 for Telemetry Log: Not Supported 00:07:51.595 Error Log Page Entries Supported: 1 00:07:51.595 Keep Alive: Not Supported 00:07:51.595 00:07:51.595 NVM Command Set Attributes 00:07:51.595 ========================== 00:07:51.595 Submission Queue Entry Size 00:07:51.595 Max: 64 00:07:51.595 Min: 64 00:07:51.595 Completion Queue Entry Size 00:07:51.595 Max: 16 00:07:51.595 Min: 16 00:07:51.595 Number of Namespaces: 256 00:07:51.595 Compare Command: Supported 00:07:51.595 Write Uncorrectable Command: Not Supported 00:07:51.595 Dataset Management Command: Supported 00:07:51.595 Write Zeroes Command: Supported 00:07:51.595 Set Features Save Field: Supported 00:07:51.595 Reservations: Not Supported 00:07:51.595 Timestamp: Supported 00:07:51.595 Copy: Supported 00:07:51.595 Volatile Write Cache: Present 00:07:51.595 Atomic Write Unit (Normal): 1 00:07:51.595 Atomic Write Unit (PFail): 1 00:07:51.595 Atomic Compare & Write Unit: 1 00:07:51.595 Fused Compare & Write: Not Supported 00:07:51.595 Scatter-Gather List 00:07:51.595 SGL Command Set: Supported 00:07:51.595 SGL Keyed: Not Supported 00:07:51.595 SGL Bit Bucket Descriptor: Not Supported 00:07:51.595 SGL Metadata Pointer: Not Supported 00:07:51.595 Oversized SGL: Not Supported 00:07:51.595 SGL Metadata Address: Not Supported 00:07:51.595 SGL Offset: Not Supported 00:07:51.595 Transport SGL Data Block: Not Supported 00:07:51.595 Replay Protected Memory Block: Not Supported 00:07:51.595 00:07:51.595 Firmware Slot Information 00:07:51.595 ========================= 00:07:51.595 Active slot: 1 00:07:51.595 Slot 1 Firmware Revision: 1.0 00:07:51.595 00:07:51.595 00:07:51.595 Commands Supported and Effects 00:07:51.595 ============================== 00:07:51.595 Admin Commands 00:07:51.595 -------------- 00:07:51.595 Delete I/O Submission Queue (00h): Supported 00:07:51.595 Create I/O Submission Queue (01h): Supported 00:07:51.595 Get Log Page (02h): Supported 00:07:51.595 Delete I/O Completion Queue (04h): Supported 00:07:51.595 Create I/O Completion Queue (05h): Supported 00:07:51.595 Identify (06h): Supported 00:07:51.595 Abort (08h): Supported 00:07:51.595 Set Features (09h): Supported 00:07:51.595 Get Features (0Ah): Supported 00:07:51.595 Asynchronous Event Request (0Ch): Supported 00:07:51.595 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.595 Directive Send (19h): Supported 00:07:51.595 Directive Receive (1Ah): Supported 00:07:51.595 Virtualization Management (1Ch): Supported 00:07:51.595 Doorbell Buffer Config (7Ch): Supported 00:07:51.595 Format NVM (80h): Supported LBA-Change 00:07:51.595 I/O Commands 00:07:51.595 ------------ 00:07:51.595 Flush (00h): Supported LBA-Change 00:07:51.595 Write (01h): Supported LBA-Change 00:07:51.595 Read (02h): Supported 00:07:51.595 Compare (05h): Supported 00:07:51.595 Write Zeroes (08h): Supported LBA-Change 00:07:51.595 Dataset Management (09h): Supported LBA-Change 00:07:51.595 Unknown (0Ch): Supported 00:07:51.595 Unknown (12h): Supported 00:07:51.595 Copy (19h): Supported LBA-Change 00:07:51.595 Unknown (1Dh): 
Supported LBA-Change 00:07:51.595 00:07:51.595 Error Log 00:07:51.595 ========= 00:07:51.595 00:07:51.595 Arbitration 00:07:51.595 =========== 00:07:51.595 Arbitration Burst: no limit 00:07:51.595 00:07:51.595 Power Management 00:07:51.595 ================ 00:07:51.595 Number of Power States: 1 00:07:51.595 Current Power State: Power State #0 00:07:51.595 Power State #0: 00:07:51.595 Max Power: 25.00 W 00:07:51.595 Non-Operational State: Operational 00:07:51.595 Entry Latency: 16 microseconds 00:07:51.595 Exit Latency: 4 microseconds 00:07:51.595 Relative Read Throughput: 0 00:07:51.595 Relative Read Latency: 0 00:07:51.595 Relative Write Throughput: 0 00:07:51.595 Relative Write Latency: 0 00:07:51.595 Idle Power: Not Reported 00:07:51.595 Active Power: Not Reported 00:07:51.595 Non-Operational Permissive Mode: Not Supported 00:07:51.595 00:07:51.595 Health Information 00:07:51.595 ================== 00:07:51.595 Critical Warnings: 00:07:51.595 Available Spare Space: OK 00:07:51.595 Temperature: OK 00:07:51.595 Device Reliability: OK 00:07:51.595 Read Only: No 00:07:51.595 Volatile Memory Backup: OK 00:07:51.595 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.595 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.595 Available Spare: 0% 00:07:51.595 Available Spare Threshold: 0% 00:07:51.595 Life Percentage Used: 0% 00:07:51.595 Data Units Read: 942 00:07:51.595 Data Units Written: 815 00:07:51.595 Host Read Commands: 48945 00:07:51.595 Host Write Commands: 47837 00:07:51.595 Controller Busy Time: 0 minutes 00:07:51.595 Power Cycles: 0 00:07:51.595 Power On Hours: 0 hours 00:07:51.595 Unsafe Shutdowns: 0 00:07:51.595 Unrecoverable Media Errors: 0 00:07:51.595 Lifetime Error Log Entries: 0 00:07:51.595 Warning Temperature Time: 0 minutes 00:07:51.595 Critical Temperature Time: 0 minutes 00:07:51.595 00:07:51.595 Number of Queues 00:07:51.595 ================ 00:07:51.595 Number of I/O Submission Queues: 64 00:07:51.595 Number of I/O Completion Queues: 64 00:07:51.595 00:07:51.595 ZNS Specific Controller Data 00:07:51.595 ============================ 00:07:51.595 Zone Append Size Limit: 0 00:07:51.595 00:07:51.595 00:07:51.595 Active Namespaces 00:07:51.595 ================= 00:07:51.595 Namespace ID:1 00:07:51.596 Error Recovery Timeout: Unlimited 00:07:51.596 Command Set Identifier: NVM (00h) 00:07:51.596 Deallocate: Supported 00:07:51.596 Deallocated/Unwritten Error: Supported 00:07:51.596 Deallocated Read Value: All 0x00 00:07:51.596 Deallocate in Write Zeroes: Not Supported 00:07:51.596 Deallocated Guard Field: 0xFFFF 00:07:51.596 Flush: Supported 00:07:51.596 Reservation: Not Supported 00:07:51.596 Namespace Sharing Capabilities: Private 00:07:51.596 Size (in LBAs): 1310720 (5GiB) 00:07:51.596 Capacity (in LBAs): 1310720 (5GiB) 00:07:51.596 Utilization (in LBAs): 1310720 (5GiB) 00:07:51.596 Thin Provisioning: Not Supported 00:07:51.596 Per-NS Atomic Units: No 00:07:51.596 Maximum Single Source Range Length: 128 00:07:51.596 Maximum Copy Length: 128 00:07:51.596 Maximum Source Range Count: 128 00:07:51.596 NGUID/EUI64 Never Reused: No 00:07:51.596 Namespace Write Protected: No 00:07:51.596 Number of LBA Formats: 8 00:07:51.596 Current LBA Format: LBA Format #04 00:07:51.596 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.596 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.596 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.596 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.596 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:51.596 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.596 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.596 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.596 00:07:51.596 NVM Specific Namespace Data 00:07:51.596 =========================== 00:07:51.596 Logical Block Storage Tag Mask: 0 00:07:51.596 Protection Information Capabilities: 00:07:51.596 16b Guard Protection Information Storage Tag Support: No 00:07:51.596 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.596 Storage Tag Check Read Support: No 00:07:51.596 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.596 17:38:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.596 17:38:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:51.857 ===================================================== 00:07:51.857 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.857 ===================================================== 00:07:51.857 Controller Capabilities/Features 00:07:51.857 ================================ 00:07:51.857 Vendor ID: 1b36 00:07:51.857 Subsystem Vendor ID: 1af4 00:07:51.857 Serial Number: 12342 00:07:51.857 Model Number: QEMU NVMe Ctrl 00:07:51.857 Firmware Version: 8.0.0 00:07:51.857 Recommended Arb Burst: 6 00:07:51.857 IEEE OUI Identifier: 00 54 52 00:07:51.857 Multi-path I/O 00:07:51.857 May have multiple subsystem ports: No 00:07:51.857 May have multiple controllers: No 00:07:51.857 Associated with SR-IOV VF: No 00:07:51.857 Max Data Transfer Size: 524288 00:07:51.857 Max Number of Namespaces: 256 00:07:51.857 Max Number of I/O Queues: 64 00:07:51.857 NVMe Specification Version (VS): 1.4 00:07:51.857 NVMe Specification Version (Identify): 1.4 00:07:51.857 Maximum Queue Entries: 2048 00:07:51.857 Contiguous Queues Required: Yes 00:07:51.857 Arbitration Mechanisms Supported 00:07:51.857 Weighted Round Robin: Not Supported 00:07:51.857 Vendor Specific: Not Supported 00:07:51.857 Reset Timeout: 7500 ms 00:07:51.857 Doorbell Stride: 4 bytes 00:07:51.857 NVM Subsystem Reset: Not Supported 00:07:51.857 Command Sets Supported 00:07:51.857 NVM Command Set: Supported 00:07:51.857 Boot Partition: Not Supported 00:07:51.857 Memory Page Size Minimum: 4096 bytes 00:07:51.857 Memory Page Size Maximum: 65536 bytes 00:07:51.857 Persistent Memory Region: Not Supported 00:07:51.857 Optional Asynchronous Events Supported 00:07:51.857 Namespace Attribute Notices: Supported 00:07:51.857 Firmware Activation Notices: Not Supported 00:07:51.857 ANA Change Notices: Not Supported 00:07:51.857 PLE Aggregate Log Change Notices: Not Supported 00:07:51.857 LBA Status Info Alert Notices: 
Not Supported 00:07:51.857 EGE Aggregate Log Change Notices: Not Supported 00:07:51.857 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.857 Zone Descriptor Change Notices: Not Supported 00:07:51.857 Discovery Log Change Notices: Not Supported 00:07:51.857 Controller Attributes 00:07:51.857 128-bit Host Identifier: Not Supported 00:07:51.857 Non-Operational Permissive Mode: Not Supported 00:07:51.857 NVM Sets: Not Supported 00:07:51.857 Read Recovery Levels: Not Supported 00:07:51.857 Endurance Groups: Not Supported 00:07:51.857 Predictable Latency Mode: Not Supported 00:07:51.857 Traffic Based Keep ALive: Not Supported 00:07:51.857 Namespace Granularity: Not Supported 00:07:51.857 SQ Associations: Not Supported 00:07:51.857 UUID List: Not Supported 00:07:51.857 Multi-Domain Subsystem: Not Supported 00:07:51.857 Fixed Capacity Management: Not Supported 00:07:51.857 Variable Capacity Management: Not Supported 00:07:51.857 Delete Endurance Group: Not Supported 00:07:51.857 Delete NVM Set: Not Supported 00:07:51.857 Extended LBA Formats Supported: Supported 00:07:51.857 Flexible Data Placement Supported: Not Supported 00:07:51.857 00:07:51.857 Controller Memory Buffer Support 00:07:51.857 ================================ 00:07:51.857 Supported: No 00:07:51.857 00:07:51.857 Persistent Memory Region Support 00:07:51.857 ================================ 00:07:51.857 Supported: No 00:07:51.857 00:07:51.857 Admin Command Set Attributes 00:07:51.857 ============================ 00:07:51.857 Security Send/Receive: Not Supported 00:07:51.857 Format NVM: Supported 00:07:51.857 Firmware Activate/Download: Not Supported 00:07:51.857 Namespace Management: Supported 00:07:51.857 Device Self-Test: Not Supported 00:07:51.857 Directives: Supported 00:07:51.857 NVMe-MI: Not Supported 00:07:51.857 Virtualization Management: Not Supported 00:07:51.857 Doorbell Buffer Config: Supported 00:07:51.857 Get LBA Status Capability: Not Supported 00:07:51.857 Command & Feature Lockdown Capability: Not Supported 00:07:51.857 Abort Command Limit: 4 00:07:51.857 Async Event Request Limit: 4 00:07:51.857 Number of Firmware Slots: N/A 00:07:51.857 Firmware Slot 1 Read-Only: N/A 00:07:51.857 Firmware Activation Without Reset: N/A 00:07:51.857 Multiple Update Detection Support: N/A 00:07:51.857 Firmware Update Granularity: No Information Provided 00:07:51.857 Per-Namespace SMART Log: Yes 00:07:51.857 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.857 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:51.857 Command Effects Log Page: Supported 00:07:51.857 Get Log Page Extended Data: Supported 00:07:51.857 Telemetry Log Pages: Not Supported 00:07:51.857 Persistent Event Log Pages: Not Supported 00:07:51.857 Supported Log Pages Log Page: May Support 00:07:51.857 Commands Supported & Effects Log Page: Not Supported 00:07:51.857 Feature Identifiers & Effects Log Page:May Support 00:07:51.857 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.857 Data Area 4 for Telemetry Log: Not Supported 00:07:51.857 Error Log Page Entries Supported: 1 00:07:51.857 Keep Alive: Not Supported 00:07:51.857 00:07:51.857 NVM Command Set Attributes 00:07:51.857 ========================== 00:07:51.857 Submission Queue Entry Size 00:07:51.857 Max: 64 00:07:51.857 Min: 64 00:07:51.857 Completion Queue Entry Size 00:07:51.857 Max: 16 00:07:51.857 Min: 16 00:07:51.857 Number of Namespaces: 256 00:07:51.857 Compare Command: Supported 00:07:51.857 Write Uncorrectable Command: Not Supported 00:07:51.857 Dataset Management Command: 
Supported 00:07:51.857 Write Zeroes Command: Supported 00:07:51.857 Set Features Save Field: Supported 00:07:51.857 Reservations: Not Supported 00:07:51.857 Timestamp: Supported 00:07:51.857 Copy: Supported 00:07:51.857 Volatile Write Cache: Present 00:07:51.857 Atomic Write Unit (Normal): 1 00:07:51.857 Atomic Write Unit (PFail): 1 00:07:51.857 Atomic Compare & Write Unit: 1 00:07:51.857 Fused Compare & Write: Not Supported 00:07:51.857 Scatter-Gather List 00:07:51.857 SGL Command Set: Supported 00:07:51.857 SGL Keyed: Not Supported 00:07:51.857 SGL Bit Bucket Descriptor: Not Supported 00:07:51.857 SGL Metadata Pointer: Not Supported 00:07:51.857 Oversized SGL: Not Supported 00:07:51.857 SGL Metadata Address: Not Supported 00:07:51.857 SGL Offset: Not Supported 00:07:51.857 Transport SGL Data Block: Not Supported 00:07:51.857 Replay Protected Memory Block: Not Supported 00:07:51.857 00:07:51.857 Firmware Slot Information 00:07:51.858 ========================= 00:07:51.858 Active slot: 1 00:07:51.858 Slot 1 Firmware Revision: 1.0 00:07:51.858 00:07:51.858 00:07:51.858 Commands Supported and Effects 00:07:51.858 ============================== 00:07:51.858 Admin Commands 00:07:51.858 -------------- 00:07:51.858 Delete I/O Submission Queue (00h): Supported 00:07:51.858 Create I/O Submission Queue (01h): Supported 00:07:51.858 Get Log Page (02h): Supported 00:07:51.858 Delete I/O Completion Queue (04h): Supported 00:07:51.858 Create I/O Completion Queue (05h): Supported 00:07:51.858 Identify (06h): Supported 00:07:51.858 Abort (08h): Supported 00:07:51.858 Set Features (09h): Supported 00:07:51.858 Get Features (0Ah): Supported 00:07:51.858 Asynchronous Event Request (0Ch): Supported 00:07:51.858 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.858 Directive Send (19h): Supported 00:07:51.858 Directive Receive (1Ah): Supported 00:07:51.858 Virtualization Management (1Ch): Supported 00:07:51.858 Doorbell Buffer Config (7Ch): Supported 00:07:51.858 Format NVM (80h): Supported LBA-Change 00:07:51.858 I/O Commands 00:07:51.858 ------------ 00:07:51.858 Flush (00h): Supported LBA-Change 00:07:51.858 Write (01h): Supported LBA-Change 00:07:51.858 Read (02h): Supported 00:07:51.858 Compare (05h): Supported 00:07:51.858 Write Zeroes (08h): Supported LBA-Change 00:07:51.858 Dataset Management (09h): Supported LBA-Change 00:07:51.858 Unknown (0Ch): Supported 00:07:51.858 Unknown (12h): Supported 00:07:51.858 Copy (19h): Supported LBA-Change 00:07:51.858 Unknown (1Dh): Supported LBA-Change 00:07:51.858 00:07:51.858 Error Log 00:07:51.858 ========= 00:07:51.858 00:07:51.858 Arbitration 00:07:51.858 =========== 00:07:51.858 Arbitration Burst: no limit 00:07:51.858 00:07:51.858 Power Management 00:07:51.858 ================ 00:07:51.858 Number of Power States: 1 00:07:51.858 Current Power State: Power State #0 00:07:51.858 Power State #0: 00:07:51.858 Max Power: 25.00 W 00:07:51.858 Non-Operational State: Operational 00:07:51.858 Entry Latency: 16 microseconds 00:07:51.858 Exit Latency: 4 microseconds 00:07:51.858 Relative Read Throughput: 0 00:07:51.858 Relative Read Latency: 0 00:07:51.858 Relative Write Throughput: 0 00:07:51.858 Relative Write Latency: 0 00:07:51.858 Idle Power: Not Reported 00:07:51.858 Active Power: Not Reported 00:07:51.858 Non-Operational Permissive Mode: Not Supported 00:07:51.858 00:07:51.858 Health Information 00:07:51.858 ================== 00:07:51.858 Critical Warnings: 00:07:51.858 Available Spare Space: OK 00:07:51.858 Temperature: OK 00:07:51.858 Device 
Reliability: OK 00:07:51.858 Read Only: No 00:07:51.858 Volatile Memory Backup: OK 00:07:51.858 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.858 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.858 Available Spare: 0% 00:07:51.858 Available Spare Threshold: 0% 00:07:51.858 Life Percentage Used: 0% 00:07:51.858 Data Units Read: 1993 00:07:51.858 Data Units Written: 1780 00:07:51.858 Host Read Commands: 101487 00:07:51.858 Host Write Commands: 99756 00:07:51.858 Controller Busy Time: 0 minutes 00:07:51.858 Power Cycles: 0 00:07:51.858 Power On Hours: 0 hours 00:07:51.858 Unsafe Shutdowns: 0 00:07:51.858 Unrecoverable Media Errors: 0 00:07:51.858 Lifetime Error Log Entries: 0 00:07:51.858 Warning Temperature Time: 0 minutes 00:07:51.858 Critical Temperature Time: 0 minutes 00:07:51.858 00:07:51.858 Number of Queues 00:07:51.858 ================ 00:07:51.858 Number of I/O Submission Queues: 64 00:07:51.858 Number of I/O Completion Queues: 64 00:07:51.858 00:07:51.858 ZNS Specific Controller Data 00:07:51.858 ============================ 00:07:51.858 Zone Append Size Limit: 0 00:07:51.858 00:07:51.858 00:07:51.858 Active Namespaces 00:07:51.858 ================= 00:07:51.858 Namespace ID:1 00:07:51.858 Error Recovery Timeout: Unlimited 00:07:51.858 Command Set Identifier: NVM (00h) 00:07:51.858 Deallocate: Supported 00:07:51.858 Deallocated/Unwritten Error: Supported 00:07:51.858 Deallocated Read Value: All 0x00 00:07:51.858 Deallocate in Write Zeroes: Not Supported 00:07:51.858 Deallocated Guard Field: 0xFFFF 00:07:51.858 Flush: Supported 00:07:51.858 Reservation: Not Supported 00:07:51.858 Namespace Sharing Capabilities: Private 00:07:51.858 Size (in LBAs): 1048576 (4GiB) 00:07:51.858 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.858 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.858 Thin Provisioning: Not Supported 00:07:51.858 Per-NS Atomic Units: No 00:07:51.858 Maximum Single Source Range Length: 128 00:07:51.858 Maximum Copy Length: 128 00:07:51.858 Maximum Source Range Count: 128 00:07:51.858 NGUID/EUI64 Never Reused: No 00:07:51.858 Namespace Write Protected: No 00:07:51.858 Number of LBA Formats: 8 00:07:51.858 Current LBA Format: LBA Format #04 00:07:51.858 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.858 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.858 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.858 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.858 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.858 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.858 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.858 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.858 00:07:51.858 NVM Specific Namespace Data 00:07:51.858 =========================== 00:07:51.858 Logical Block Storage Tag Mask: 0 00:07:51.858 Protection Information Capabilities: 00:07:51.858 16b Guard Protection Information Storage Tag Support: No 00:07:51.858 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.858 Storage Tag Check Read Support: No 00:07:51.858 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Namespace ID:2 00:07:51.858 Error Recovery Timeout: Unlimited 00:07:51.858 Command Set Identifier: NVM (00h) 00:07:51.858 Deallocate: Supported 00:07:51.858 Deallocated/Unwritten Error: Supported 00:07:51.858 Deallocated Read Value: All 0x00 00:07:51.858 Deallocate in Write Zeroes: Not Supported 00:07:51.858 Deallocated Guard Field: 0xFFFF 00:07:51.858 Flush: Supported 00:07:51.858 Reservation: Not Supported 00:07:51.858 Namespace Sharing Capabilities: Private 00:07:51.858 Size (in LBAs): 1048576 (4GiB) 00:07:51.858 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.858 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.858 Thin Provisioning: Not Supported 00:07:51.858 Per-NS Atomic Units: No 00:07:51.858 Maximum Single Source Range Length: 128 00:07:51.858 Maximum Copy Length: 128 00:07:51.858 Maximum Source Range Count: 128 00:07:51.858 NGUID/EUI64 Never Reused: No 00:07:51.858 Namespace Write Protected: No 00:07:51.858 Number of LBA Formats: 8 00:07:51.858 Current LBA Format: LBA Format #04 00:07:51.858 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.858 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.858 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.858 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.858 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.858 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.858 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.858 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.858 00:07:51.858 NVM Specific Namespace Data 00:07:51.858 =========================== 00:07:51.858 Logical Block Storage Tag Mask: 0 00:07:51.858 Protection Information Capabilities: 00:07:51.858 16b Guard Protection Information Storage Tag Support: No 00:07:51.858 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.858 Storage Tag Check Read Support: No 00:07:51.858 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.858 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Namespace ID:3 00:07:51.859 Error Recovery Timeout: Unlimited 00:07:51.859 Command Set Identifier: NVM (00h) 00:07:51.859 Deallocate: Supported 00:07:51.859 Deallocated/Unwritten Error: Supported 00:07:51.859 Deallocated Read Value: All 0x00 00:07:51.859 Deallocate in Write Zeroes: Not Supported 00:07:51.859 Deallocated Guard Field: 0xFFFF 00:07:51.859 Flush: Supported 00:07:51.859 Reservation: Not Supported 00:07:51.859 
Namespace Sharing Capabilities: Private 00:07:51.859 Size (in LBAs): 1048576 (4GiB) 00:07:51.859 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.859 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.859 Thin Provisioning: Not Supported 00:07:51.859 Per-NS Atomic Units: No 00:07:51.859 Maximum Single Source Range Length: 128 00:07:51.859 Maximum Copy Length: 128 00:07:51.859 Maximum Source Range Count: 128 00:07:51.859 NGUID/EUI64 Never Reused: No 00:07:51.859 Namespace Write Protected: No 00:07:51.859 Number of LBA Formats: 8 00:07:51.859 Current LBA Format: LBA Format #04 00:07:51.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.859 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.859 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.859 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.859 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.859 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.859 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.859 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.859 00:07:51.859 NVM Specific Namespace Data 00:07:51.859 =========================== 00:07:51.859 Logical Block Storage Tag Mask: 0 00:07:51.859 Protection Information Capabilities: 00:07:51.859 16b Guard Protection Information Storage Tag Support: No 00:07:51.859 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.859 Storage Tag Check Read Support: No 00:07:51.859 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.859 17:38:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.859 17:38:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:52.119 ===================================================== 00:07:52.119 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.119 ===================================================== 00:07:52.119 Controller Capabilities/Features 00:07:52.119 ================================ 00:07:52.119 Vendor ID: 1b36 00:07:52.119 Subsystem Vendor ID: 1af4 00:07:52.119 Serial Number: 12343 00:07:52.119 Model Number: QEMU NVMe Ctrl 00:07:52.119 Firmware Version: 8.0.0 00:07:52.119 Recommended Arb Burst: 6 00:07:52.119 IEEE OUI Identifier: 00 54 52 00:07:52.119 Multi-path I/O 00:07:52.119 May have multiple subsystem ports: No 00:07:52.119 May have multiple controllers: Yes 00:07:52.119 Associated with SR-IOV VF: No 00:07:52.119 Max Data Transfer Size: 524288 00:07:52.119 Max Number of Namespaces: 256 00:07:52.119 Max Number of I/O Queues: 64 00:07:52.119 NVMe Specification Version (VS): 1.4 00:07:52.119 NVMe Specification Version (Identify): 1.4 00:07:52.119 Maximum Queue Entries: 2048 
00:07:52.119 Contiguous Queues Required: Yes 00:07:52.119 Arbitration Mechanisms Supported 00:07:52.119 Weighted Round Robin: Not Supported 00:07:52.119 Vendor Specific: Not Supported 00:07:52.119 Reset Timeout: 7500 ms 00:07:52.119 Doorbell Stride: 4 bytes 00:07:52.119 NVM Subsystem Reset: Not Supported 00:07:52.119 Command Sets Supported 00:07:52.119 NVM Command Set: Supported 00:07:52.119 Boot Partition: Not Supported 00:07:52.119 Memory Page Size Minimum: 4096 bytes 00:07:52.119 Memory Page Size Maximum: 65536 bytes 00:07:52.119 Persistent Memory Region: Not Supported 00:07:52.119 Optional Asynchronous Events Supported 00:07:52.119 Namespace Attribute Notices: Supported 00:07:52.119 Firmware Activation Notices: Not Supported 00:07:52.119 ANA Change Notices: Not Supported 00:07:52.119 PLE Aggregate Log Change Notices: Not Supported 00:07:52.119 LBA Status Info Alert Notices: Not Supported 00:07:52.119 EGE Aggregate Log Change Notices: Not Supported 00:07:52.119 Normal NVM Subsystem Shutdown event: Not Supported 00:07:52.119 Zone Descriptor Change Notices: Not Supported 00:07:52.119 Discovery Log Change Notices: Not Supported 00:07:52.119 Controller Attributes 00:07:52.119 128-bit Host Identifier: Not Supported 00:07:52.120 Non-Operational Permissive Mode: Not Supported 00:07:52.120 NVM Sets: Not Supported 00:07:52.120 Read Recovery Levels: Not Supported 00:07:52.120 Endurance Groups: Supported 00:07:52.120 Predictable Latency Mode: Not Supported 00:07:52.120 Traffic Based Keep Alive: Not Supported 00:07:52.120 Namespace Granularity: Not Supported 00:07:52.120 SQ Associations: Not Supported 00:07:52.120 UUID List: Not Supported 00:07:52.120 Multi-Domain Subsystem: Not Supported 00:07:52.120 Fixed Capacity Management: Not Supported 00:07:52.120 Variable Capacity Management: Not Supported 00:07:52.120 Delete Endurance Group: Not Supported 00:07:52.120 Delete NVM Set: Not Supported 00:07:52.120 Extended LBA Formats Supported: Supported 00:07:52.120 Flexible Data Placement Supported: Supported 00:07:52.120 00:07:52.120 Controller Memory Buffer Support 00:07:52.120 ================================ 00:07:52.120 Supported: No 00:07:52.120 00:07:52.120 Persistent Memory Region Support 00:07:52.120 ================================ 00:07:52.120 Supported: No 00:07:52.120 00:07:52.120 Admin Command Set Attributes 00:07:52.120 ============================ 00:07:52.120 Security Send/Receive: Not Supported 00:07:52.120 Format NVM: Supported 00:07:52.120 Firmware Activate/Download: Not Supported 00:07:52.120 Namespace Management: Supported 00:07:52.120 Device Self-Test: Not Supported 00:07:52.120 Directives: Supported 00:07:52.120 NVMe-MI: Not Supported 00:07:52.120 Virtualization Management: Not Supported 00:07:52.120 Doorbell Buffer Config: Supported 00:07:52.120 Get LBA Status Capability: Not Supported 00:07:52.120 Command & Feature Lockdown Capability: Not Supported 00:07:52.120 Abort Command Limit: 4 00:07:52.120 Async Event Request Limit: 4 00:07:52.120 Number of Firmware Slots: N/A 00:07:52.120 Firmware Slot 1 Read-Only: N/A 00:07:52.120 Firmware Activation Without Reset: N/A 00:07:52.120 Multiple Update Detection Support: N/A 00:07:52.120 Firmware Update Granularity: No Information Provided 00:07:52.120 Per-Namespace SMART Log: Yes 00:07:52.120 Asymmetric Namespace Access Log Page: Not Supported 00:07:52.120 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:52.120 Command Effects Log Page: Supported 00:07:52.120 Get Log Page Extended Data: Supported 00:07:52.120 Telemetry Log Pages: Not
Supported 00:07:52.120 Persistent Event Log Pages: Not Supported 00:07:52.120 Supported Log Pages Log Page: May Support 00:07:52.120 Commands Supported & Effects Log Page: Not Supported 00:07:52.120 Feature Identifiers & Effects Log Page: May Support 00:07:52.120 NVMe-MI Commands & Effects Log Page: May Support 00:07:52.120 Data Area 4 for Telemetry Log: Not Supported 00:07:52.120 Error Log Page Entries Supported: 1 00:07:52.120 Keep Alive: Not Supported 00:07:52.120 00:07:52.120 NVM Command Set Attributes 00:07:52.120 ========================== 00:07:52.120 Submission Queue Entry Size 00:07:52.120 Max: 64 00:07:52.120 Min: 64 00:07:52.120 Completion Queue Entry Size 00:07:52.120 Max: 16 00:07:52.120 Min: 16 00:07:52.120 Number of Namespaces: 256 00:07:52.120 Compare Command: Supported 00:07:52.120 Write Uncorrectable Command: Not Supported 00:07:52.120 Dataset Management Command: Supported 00:07:52.120 Write Zeroes Command: Supported 00:07:52.120 Set Features Save Field: Supported 00:07:52.120 Reservations: Not Supported 00:07:52.120 Timestamp: Supported 00:07:52.120 Copy: Supported 00:07:52.120 Volatile Write Cache: Present 00:07:52.120 Atomic Write Unit (Normal): 1 00:07:52.120 Atomic Write Unit (PFail): 1 00:07:52.120 Atomic Compare & Write Unit: 1 00:07:52.120 Fused Compare & Write: Not Supported 00:07:52.120 Scatter-Gather List 00:07:52.120 SGL Command Set: Supported 00:07:52.120 SGL Keyed: Not Supported 00:07:52.120 SGL Bit Bucket Descriptor: Not Supported 00:07:52.120 SGL Metadata Pointer: Not Supported 00:07:52.120 Oversized SGL: Not Supported 00:07:52.120 SGL Metadata Address: Not Supported 00:07:52.120 SGL Offset: Not Supported 00:07:52.120 Transport SGL Data Block: Not Supported 00:07:52.120 Replay Protected Memory Block: Not Supported 00:07:52.120 00:07:52.120 Firmware Slot Information 00:07:52.120 ========================= 00:07:52.120 Active slot: 1 00:07:52.120 Slot 1 Firmware Revision: 1.0 00:07:52.120 00:07:52.120 00:07:52.120 Commands Supported and Effects 00:07:52.120 ============================== 00:07:52.120 Admin Commands 00:07:52.120 -------------- 00:07:52.120 Delete I/O Submission Queue (00h): Supported 00:07:52.120 Create I/O Submission Queue (01h): Supported 00:07:52.120 Get Log Page (02h): Supported 00:07:52.120 Delete I/O Completion Queue (04h): Supported 00:07:52.120 Create I/O Completion Queue (05h): Supported 00:07:52.120 Identify (06h): Supported 00:07:52.120 Abort (08h): Supported 00:07:52.120 Set Features (09h): Supported 00:07:52.120 Get Features (0Ah): Supported 00:07:52.120 Asynchronous Event Request (0Ch): Supported 00:07:52.120 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:52.120 Directive Send (19h): Supported 00:07:52.120 Directive Receive (1Ah): Supported 00:07:52.120 Virtualization Management (1Ch): Supported 00:07:52.120 Doorbell Buffer Config (7Ch): Supported 00:07:52.120 Format NVM (80h): Supported LBA-Change 00:07:52.120 I/O Commands 00:07:52.120 ------------ 00:07:52.120 Flush (00h): Supported LBA-Change 00:07:52.120 Write (01h): Supported LBA-Change 00:07:52.120 Read (02h): Supported 00:07:52.120 Compare (05h): Supported 00:07:52.120 Write Zeroes (08h): Supported LBA-Change 00:07:52.120 Dataset Management (09h): Supported LBA-Change 00:07:52.120 Unknown (0Ch): Supported 00:07:52.120 Unknown (12h): Supported 00:07:52.120 Copy (19h): Supported LBA-Change 00:07:52.120 Unknown (1Dh): Supported LBA-Change 00:07:52.120 00:07:52.120 Error Log 00:07:52.120 ========= 00:07:52.120 00:07:52.120 Arbitration 00:07:52.120 ===========
00:07:52.120 Arbitration Burst: no limit 00:07:52.120 00:07:52.120 Power Management 00:07:52.120 ================ 00:07:52.120 Number of Power States: 1 00:07:52.120 Current Power State: Power State #0 00:07:52.120 Power State #0: 00:07:52.120 Max Power: 25.00 W 00:07:52.120 Non-Operational State: Operational 00:07:52.120 Entry Latency: 16 microseconds 00:07:52.120 Exit Latency: 4 microseconds 00:07:52.120 Relative Read Throughput: 0 00:07:52.120 Relative Read Latency: 0 00:07:52.120 Relative Write Throughput: 0 00:07:52.120 Relative Write Latency: 0 00:07:52.121 Idle Power: Not Reported 00:07:52.121 Active Power: Not Reported 00:07:52.121 Non-Operational Permissive Mode: Not Supported 00:07:52.121 00:07:52.121 Health Information 00:07:52.121 ================== 00:07:52.121 Critical Warnings: 00:07:52.121 Available Spare Space: OK 00:07:52.121 Temperature: OK 00:07:52.121 Device Reliability: OK 00:07:52.121 Read Only: No 00:07:52.121 Volatile Memory Backup: OK 00:07:52.121 Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.121 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:52.121 Available Spare: 0% 00:07:52.121 Available Spare Threshold: 0% 00:07:52.121 Life Percentage Used: 0% 00:07:52.121 Data Units Read: 798 00:07:52.121 Data Units Written: 727 00:07:52.121 Host Read Commands: 34933 00:07:52.121 Host Write Commands: 34356 00:07:52.121 Controller Busy Time: 0 minutes 00:07:52.121 Power Cycles: 0 00:07:52.121 Power On Hours: 0 hours 00:07:52.121 Unsafe Shutdowns: 0 00:07:52.121 Unrecoverable Media Errors: 0 00:07:52.121 Lifetime Error Log Entries: 0 00:07:52.121 Warning Temperature Time: 0 minutes 00:07:52.121 Critical Temperature Time: 0 minutes 00:07:52.121 00:07:52.121 Number of Queues 00:07:52.121 ================ 00:07:52.121 Number of I/O Submission Queues: 64 00:07:52.121 Number of I/O Completion Queues: 64 00:07:52.121 00:07:52.121 ZNS Specific Controller Data 00:07:52.121 ============================ 00:07:52.121 Zone Append Size Limit: 0 00:07:52.121 00:07:52.121 00:07:52.121 Active Namespaces 00:07:52.121 ================= 00:07:52.121 Namespace ID:1 00:07:52.121 Error Recovery Timeout: Unlimited 00:07:52.121 Command Set Identifier: NVM (00h) 00:07:52.121 Deallocate: Supported 00:07:52.121 Deallocated/Unwritten Error: Supported 00:07:52.121 Deallocated Read Value: All 0x00 00:07:52.121 Deallocate in Write Zeroes: Not Supported 00:07:52.121 Deallocated Guard Field: 0xFFFF 00:07:52.121 Flush: Supported 00:07:52.121 Reservation: Not Supported 00:07:52.121 Namespace Sharing Capabilities: Multiple Controllers 00:07:52.121 Size (in LBAs): 262144 (1GiB) 00:07:52.121 Capacity (in LBAs): 262144 (1GiB) 00:07:52.121 Utilization (in LBAs): 262144 (1GiB) 00:07:52.121 Thin Provisioning: Not Supported 00:07:52.121 Per-NS Atomic Units: No 00:07:52.121 Maximum Single Source Range Length: 128 00:07:52.121 Maximum Copy Length: 128 00:07:52.121 Maximum Source Range Count: 128 00:07:52.121 NGUID/EUI64 Never Reused: No 00:07:52.121 Namespace Write Protected: No 00:07:52.121 Endurance group ID: 1 00:07:52.121 Number of LBA Formats: 8 00:07:52.121 Current LBA Format: LBA Format #04 00:07:52.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:52.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:52.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:52.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:52.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:52.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:52.121 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:52.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:52.121 00:07:52.121 Get Feature FDP: 00:07:52.121 ================ 00:07:52.121 Enabled: Yes 00:07:52.121 FDP configuration index: 0 00:07:52.121 00:07:52.121 FDP configurations log page 00:07:52.121 =========================== 00:07:52.121 Number of FDP configurations: 1 00:07:52.121 Version: 0 00:07:52.121 Size: 112 00:07:52.121 FDP Configuration Descriptor: 0 00:07:52.121 Descriptor Size: 96 00:07:52.121 Reclaim Group Identifier format: 2 00:07:52.121 FDP Volatile Write Cache: Not Present 00:07:52.121 FDP Configuration: Valid 00:07:52.121 Vendor Specific Size: 0 00:07:52.121 Number of Reclaim Groups: 2 00:07:52.121 Number of Reclaim Unit Handles: 8 00:07:52.121 Max Placement Identifiers: 128 00:07:52.121 Number of Namespaces Supported: 256 00:07:52.121 Reclaim Unit Nominal Size: 6000000 bytes 00:07:52.121 Estimated Reclaim Unit Time Limit: Not Reported 00:07:52.121 RUH Desc #000: RUH Type: Initially Isolated 00:07:52.121 RUH Desc #001: RUH Type: Initially Isolated 00:07:52.121 RUH Desc #002: RUH Type: Initially Isolated 00:07:52.121 RUH Desc #003: RUH Type: Initially Isolated 00:07:52.121 RUH Desc #004: RUH Type: Initially Isolated 00:07:52.121 RUH Desc #005: RUH Type: Initially Isolated 00:07:52.121 RUH Desc #006: RUH Type: Initially Isolated 00:07:52.121 RUH Desc #007: RUH Type: Initially Isolated 00:07:52.121 00:07:52.121 FDP reclaim unit handle usage log page 00:07:52.121 ====================================== 00:07:52.121 Number of Reclaim Unit Handles: 8 00:07:52.121 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:52.121 RUH Usage Desc #001: RUH Attributes: Unused 00:07:52.121 RUH Usage Desc #002: RUH Attributes: Unused 00:07:52.121 RUH Usage Desc #003: RUH Attributes: Unused 00:07:52.121 RUH Usage Desc #004: RUH Attributes: Unused 00:07:52.121 RUH Usage Desc #005: RUH Attributes: Unused 00:07:52.121 RUH Usage Desc #006: RUH Attributes: Unused 00:07:52.121 RUH Usage Desc #007: RUH Attributes: Unused 00:07:52.121 00:07:52.121 FDP statistics log page 00:07:52.121 ======================= 00:07:52.121 Host bytes with metadata written: 402235392 00:07:52.121 Media bytes with metadata written: 402276352 00:07:52.121 Media bytes erased: 0 00:07:52.121 00:07:52.121 FDP events log page 00:07:52.121 =================== 00:07:52.121 Number of FDP events: 0 00:07:52.121 00:07:52.121 NVM Specific Namespace Data 00:07:52.121 =========================== 00:07:52.121 Logical Block Storage Tag Mask: 0 00:07:52.121 Protection Information Capabilities: 00:07:52.121 16b Guard Protection Information Storage Tag Support: No 00:07:52.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:52.121 Storage Tag Check Read Support: No 00:07:52.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.122 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:52.122 00:07:52.122 real 0m1.304s 00:07:52.122 user 0m0.478s 00:07:52.122 sys 0m0.595s 00:07:52.122 17:38:15 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.122 ************************************ 00:07:52.122 END TEST nvme_identify 00:07:52.122 ************************************ 00:07:52.122 17:38:15 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:52.122 17:38:15 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:52.122 17:38:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.122 17:38:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.122 17:38:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.122 ************************************ 00:07:52.122 START TEST nvme_perf 00:07:52.122 ************************************ 00:07:52.122 17:38:15 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:52.122 17:38:15 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:53.502 Initializing NVMe Controllers 00:07:53.502 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:53.503 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:53.503 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:53.503 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:53.503 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:53.503 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:53.503 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:53.503 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:53.503 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:53.503 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:53.503 Initialization complete. Launching workers. 
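The two commands captured above (spdk_nvme_identify and spdk_nvme_perf) can also be re-run by hand outside the autotest harness. A minimal sketch, assuming the same build tree and PCIe addresses as this run; the flag readings are taken from the command lines above and from the tools' help text rather than from the harness, so verify them against your build (-i and -N are simply passed through unchanged):

    # Hugepages and device binding must be set up first, e.g. via scripts/setup.sh,
    # and the tools need root (or equivalent VFIO/UIO permissions) to claim devices.

    # Identify a single controller, selected by its PCIe transport ID (same BDF as above):
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

    # Repeat the 1-second read workload whose results follow: queue depth 128 (-q),
    # 12288-byte I/Os (-o), read pattern (-w), run time in seconds (-t); -L enables
    # latency tracking, and doubling it (-LL) additionally prints the per-device
    # latency histograms shown below:
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w read -o 12288 -t 1 -LL -i 0 -N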
00:07:53.503 ======================================================== 00:07:53.503 Latency(us) 00:07:53.503 Device Information : IOPS MiB/s Average min max 00:07:53.503 PCIE (0000:00:10.0) NSID 1 from core 0: 11372.34 133.27 11277.67 8660.23 35446.33 00:07:53.503 PCIE (0000:00:11.0) NSID 1 from core 0: 11372.34 133.27 11265.83 8599.78 34093.10 00:07:53.503 PCIE (0000:00:13.0) NSID 1 from core 0: 11372.34 133.27 11250.74 8634.04 33751.07 00:07:53.503 PCIE (0000:00:12.0) NSID 1 from core 0: 11372.34 133.27 11235.29 8849.86 32520.89 00:07:53.503 PCIE (0000:00:12.0) NSID 2 from core 0: 11372.34 133.27 11219.98 8884.57 31243.33 00:07:53.503 PCIE (0000:00:12.0) NSID 3 from core 0: 11436.23 134.02 11142.08 8877.63 23117.70 00:07:53.503 ======================================================== 00:07:53.503 Total : 68297.91 800.37 11231.85 8599.78 35446.33 00:07:53.503 00:07:53.503 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:53.503 ================================================================================= 00:07:53.503 1.00000% : 9124.628us 00:07:53.503 10.00000% : 9628.751us 00:07:53.503 25.00000% : 10032.049us 00:07:53.503 50.00000% : 10536.172us 00:07:53.503 75.00000% : 11494.006us 00:07:53.503 90.00000% : 13712.148us 00:07:53.503 95.00000% : 15526.991us 00:07:53.503 98.00000% : 18249.255us 00:07:53.503 99.00000% : 26617.698us 00:07:53.503 99.50000% : 34078.720us 00:07:53.503 99.90000% : 35288.615us 00:07:53.503 99.99000% : 35490.265us 00:07:53.503 99.99900% : 35490.265us 00:07:53.503 99.99990% : 35490.265us 00:07:53.503 99.99999% : 35490.265us 00:07:53.503 00:07:53.503 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:53.503 ================================================================================= 00:07:53.503 1.00000% : 9124.628us 00:07:53.503 10.00000% : 9628.751us 00:07:53.503 25.00000% : 10032.049us 00:07:53.503 50.00000% : 10536.172us 00:07:53.503 75.00000% : 11443.594us 00:07:53.503 90.00000% : 13611.323us 00:07:53.503 95.00000% : 15526.991us 00:07:53.503 98.00000% : 18955.028us 00:07:53.503 99.00000% : 25206.154us 00:07:53.503 99.50000% : 32868.825us 00:07:53.503 99.90000% : 33877.071us 00:07:53.503 99.99000% : 34078.720us 00:07:53.503 99.99900% : 34280.369us 00:07:53.503 99.99990% : 34280.369us 00:07:53.503 99.99999% : 34280.369us 00:07:53.503 00:07:53.503 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:53.503 ================================================================================= 00:07:53.503 1.00000% : 9175.040us 00:07:53.503 10.00000% : 9628.751us 00:07:53.503 25.00000% : 9981.637us 00:07:53.503 50.00000% : 10485.760us 00:07:53.503 75.00000% : 11393.182us 00:07:53.503 90.00000% : 13611.323us 00:07:53.503 95.00000% : 15224.517us 00:07:53.503 98.00000% : 19660.800us 00:07:53.503 99.00000% : 24702.031us 00:07:53.503 99.50000% : 32465.526us 00:07:53.503 99.90000% : 33675.422us 00:07:53.503 99.99000% : 33877.071us 00:07:53.503 99.99900% : 33877.071us 00:07:53.503 99.99990% : 33877.071us 00:07:53.503 99.99999% : 33877.071us 00:07:53.503 00:07:53.503 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:53.503 ================================================================================= 00:07:53.503 1.00000% : 9225.452us 00:07:53.503 10.00000% : 9628.751us 00:07:53.503 25.00000% : 10032.049us 00:07:53.503 50.00000% : 10485.760us 00:07:53.503 75.00000% : 11393.182us 00:07:53.503 90.00000% : 13510.498us 00:07:53.503 95.00000% : 15325.342us 00:07:53.503 98.00000% : 19862.449us 
00:07:53.503 99.00000% : 23391.311us 00:07:53.503 99.50000% : 31255.631us 00:07:53.503 99.90000% : 32263.877us 00:07:53.503 99.99000% : 32667.175us 00:07:53.503 99.99900% : 32667.175us 00:07:53.503 99.99990% : 32667.175us 00:07:53.503 99.99999% : 32667.175us 00:07:53.503 00:07:53.503 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:53.503 ================================================================================= 00:07:53.503 1.00000% : 9225.452us 00:07:53.503 10.00000% : 9679.163us 00:07:53.503 25.00000% : 10032.049us 00:07:53.503 50.00000% : 10536.172us 00:07:53.503 75.00000% : 11393.182us 00:07:53.503 90.00000% : 13712.148us 00:07:53.503 95.00000% : 15224.517us 00:07:53.503 98.00000% : 18047.606us 00:07:53.503 99.00000% : 22080.591us 00:07:53.503 99.50000% : 30045.735us 00:07:53.503 99.90000% : 31053.982us 00:07:53.503 99.99000% : 31255.631us 00:07:53.503 99.99900% : 31255.631us 00:07:53.503 99.99990% : 31255.631us 00:07:53.503 99.99999% : 31255.631us 00:07:53.503 00:07:53.503 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:53.503 ================================================================================= 00:07:53.503 1.00000% : 9225.452us 00:07:53.503 10.00000% : 9679.163us 00:07:53.503 25.00000% : 10032.049us 00:07:53.503 50.00000% : 10536.172us 00:07:53.503 75.00000% : 11443.594us 00:07:53.503 90.00000% : 13812.972us 00:07:53.503 95.00000% : 15123.692us 00:07:53.503 98.00000% : 17341.834us 00:07:53.503 99.00000% : 19358.326us 00:07:53.503 99.50000% : 21778.117us 00:07:53.503 99.90000% : 22887.188us 00:07:53.503 99.99000% : 23189.662us 00:07:53.503 99.99900% : 23189.662us 00:07:53.503 99.99990% : 23189.662us 00:07:53.503 99.99999% : 23189.662us 00:07:53.503 00:07:53.503 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:53.503 ============================================================================== 00:07:53.503 Range in us Cumulative IO count 00:07:53.503 8620.505 - 8670.917: 0.0263% ( 3) 00:07:53.503 8670.917 - 8721.329: 0.0439% ( 2) 00:07:53.503 8721.329 - 8771.742: 0.0702% ( 3) 00:07:53.503 8771.742 - 8822.154: 0.1053% ( 4) 00:07:53.503 8822.154 - 8872.566: 0.1317% ( 3) 00:07:53.503 8872.566 - 8922.978: 0.2195% ( 10) 00:07:53.503 8922.978 - 8973.391: 0.3950% ( 20) 00:07:53.503 8973.391 - 9023.803: 0.5881% ( 22) 00:07:53.503 9023.803 - 9074.215: 0.9919% ( 46) 00:07:53.503 9074.215 - 9124.628: 1.4221% ( 49) 00:07:53.503 9124.628 - 9175.040: 1.9312% ( 58) 00:07:53.503 9175.040 - 9225.452: 2.4930% ( 64) 00:07:53.503 9225.452 - 9275.865: 3.3445% ( 97) 00:07:53.503 9275.865 - 9326.277: 4.2047% ( 98) 00:07:53.503 9326.277 - 9376.689: 5.1264% ( 105) 00:07:53.503 9376.689 - 9427.102: 6.1183% ( 113) 00:07:53.503 9427.102 - 9477.514: 7.4087% ( 147) 00:07:53.503 9477.514 - 9527.926: 8.6201% ( 138) 00:07:53.503 9527.926 - 9578.338: 9.9895% ( 156) 00:07:53.503 9578.338 - 9628.751: 11.5256% ( 175) 00:07:53.503 9628.751 - 9679.163: 13.0179% ( 170) 00:07:53.503 9679.163 - 9729.575: 14.6243% ( 183) 00:07:53.503 9729.575 - 9779.988: 16.7047% ( 237) 00:07:53.503 9779.988 - 9830.400: 18.6534% ( 222) 00:07:53.503 9830.400 - 9880.812: 20.5758% ( 219) 00:07:53.503 9880.812 - 9931.225: 22.6387% ( 235) 00:07:53.503 9931.225 - 9981.637: 24.9912% ( 268) 00:07:53.503 9981.637 - 10032.049: 27.4666% ( 282) 00:07:53.503 10032.049 - 10082.462: 29.9245% ( 280) 00:07:53.503 10082.462 - 10132.874: 32.3034% ( 271) 00:07:53.503 10132.874 - 10183.286: 35.0421% ( 312) 00:07:53.503 10183.286 - 10233.698: 37.4034% ( 269) 00:07:53.503 10233.698 
- 10284.111: 39.8964% ( 284) 00:07:53.503 10284.111 - 10334.523: 42.3455% ( 279) 00:07:53.503 10334.523 - 10384.935: 44.6190% ( 259) 00:07:53.503 10384.935 - 10435.348: 46.9277% ( 263) 00:07:53.503 10435.348 - 10485.760: 49.1836% ( 257) 00:07:53.503 10485.760 - 10536.172: 51.5098% ( 265) 00:07:53.503 10536.172 - 10586.585: 53.5815% ( 236) 00:07:53.503 10586.585 - 10636.997: 55.8111% ( 254) 00:07:53.503 10636.997 - 10687.409: 57.9442% ( 243) 00:07:53.503 10687.409 - 10737.822: 59.7261% ( 203) 00:07:53.503 10737.822 - 10788.234: 61.4291% ( 194) 00:07:53.503 10788.234 - 10838.646: 63.0618% ( 186) 00:07:53.503 10838.646 - 10889.058: 64.5541% ( 170) 00:07:53.503 10889.058 - 10939.471: 65.9059% ( 154) 00:07:53.503 10939.471 - 10989.883: 67.1787% ( 145) 00:07:53.503 10989.883 - 11040.295: 68.3111% ( 129) 00:07:53.503 11040.295 - 11090.708: 69.2240% ( 104) 00:07:53.503 11090.708 - 11141.120: 70.0843% ( 98) 00:07:53.503 11141.120 - 11191.532: 70.8392% ( 86) 00:07:53.503 11191.532 - 11241.945: 71.6468% ( 92) 00:07:53.503 11241.945 - 11292.357: 72.3227% ( 77) 00:07:53.503 11292.357 - 11342.769: 73.2268% ( 103) 00:07:53.503 11342.769 - 11393.182: 73.8150% ( 67) 00:07:53.503 11393.182 - 11443.594: 74.5172% ( 80) 00:07:53.503 11443.594 - 11494.006: 75.1053% ( 67) 00:07:53.503 11494.006 - 11544.418: 75.7637% ( 75) 00:07:53.503 11544.418 - 11594.831: 76.3167% ( 63) 00:07:53.503 11594.831 - 11645.243: 76.9048% ( 67) 00:07:53.503 11645.243 - 11695.655: 77.3525% ( 51) 00:07:53.503 11695.655 - 11746.068: 77.8178% ( 53) 00:07:53.503 11746.068 - 11796.480: 78.3445% ( 60) 00:07:53.504 11796.480 - 11846.892: 78.7131% ( 42) 00:07:53.504 11846.892 - 11897.305: 79.1433% ( 49) 00:07:53.504 11897.305 - 11947.717: 79.5207% ( 43) 00:07:53.504 11947.717 - 11998.129: 79.8631% ( 39) 00:07:53.504 11998.129 - 12048.542: 80.3020% ( 50) 00:07:53.504 12048.542 - 12098.954: 80.6443% ( 39) 00:07:53.504 12098.954 - 12149.366: 80.9691% ( 37) 00:07:53.504 12149.366 - 12199.778: 81.4168% ( 51) 00:07:53.504 12199.778 - 12250.191: 81.7679% ( 40) 00:07:53.504 12250.191 - 12300.603: 82.0927% ( 37) 00:07:53.504 12300.603 - 12351.015: 82.5316% ( 50) 00:07:53.504 12351.015 - 12401.428: 82.8827% ( 40) 00:07:53.504 12401.428 - 12451.840: 83.2426% ( 41) 00:07:53.504 12451.840 - 12502.252: 83.5235% ( 32) 00:07:53.504 12502.252 - 12552.665: 83.9624% ( 50) 00:07:53.504 12552.665 - 12603.077: 84.2960% ( 38) 00:07:53.504 12603.077 - 12653.489: 84.6998% ( 46) 00:07:53.504 12653.489 - 12703.902: 84.9807% ( 32) 00:07:53.504 12703.902 - 12754.314: 85.3055% ( 37) 00:07:53.504 12754.314 - 12804.726: 85.6039% ( 34) 00:07:53.504 12804.726 - 12855.138: 85.9112% ( 35) 00:07:53.504 12855.138 - 12905.551: 86.1570% ( 28) 00:07:53.504 12905.551 - 13006.375: 86.6485% ( 56) 00:07:53.504 13006.375 - 13107.200: 87.2454% ( 68) 00:07:53.504 13107.200 - 13208.025: 87.8160% ( 65) 00:07:53.504 13208.025 - 13308.849: 88.3778% ( 64) 00:07:53.504 13308.849 - 13409.674: 88.8606% ( 55) 00:07:53.504 13409.674 - 13510.498: 89.2644% ( 46) 00:07:53.504 13510.498 - 13611.323: 89.6682% ( 46) 00:07:53.504 13611.323 - 13712.148: 90.0369% ( 42) 00:07:53.504 13712.148 - 13812.972: 90.3090% ( 31) 00:07:53.504 13812.972 - 13913.797: 90.6601% ( 40) 00:07:53.504 13913.797 - 14014.622: 90.9849% ( 37) 00:07:53.504 14014.622 - 14115.446: 91.3097% ( 37) 00:07:53.504 14115.446 - 14216.271: 91.6608% ( 40) 00:07:53.504 14216.271 - 14317.095: 91.9329% ( 31) 00:07:53.504 14317.095 - 14417.920: 92.0997% ( 19) 00:07:53.504 14417.920 - 14518.745: 92.3367% ( 27) 00:07:53.504 14518.745 - 14619.569: 
92.5913% ( 29) 00:07:53.504 14619.569 - 14720.394: 92.8985% ( 35) 00:07:53.504 14720.394 - 14821.218: 93.1619% ( 30) 00:07:53.504 14821.218 - 14922.043: 93.4867% ( 37) 00:07:53.504 14922.043 - 15022.868: 93.6973% ( 24) 00:07:53.504 15022.868 - 15123.692: 93.9607% ( 30) 00:07:53.504 15123.692 - 15224.517: 94.2855% ( 37) 00:07:53.504 15224.517 - 15325.342: 94.6278% ( 39) 00:07:53.504 15325.342 - 15426.166: 94.9702% ( 39) 00:07:53.504 15426.166 - 15526.991: 95.2598% ( 33) 00:07:53.504 15526.991 - 15627.815: 95.5758% ( 36) 00:07:53.504 15627.815 - 15728.640: 95.8655% ( 33) 00:07:53.504 15728.640 - 15829.465: 96.0323% ( 19) 00:07:53.504 15829.465 - 15930.289: 96.3044% ( 31) 00:07:53.504 15930.289 - 16031.114: 96.4975% ( 22) 00:07:53.504 16031.114 - 16131.938: 96.6994% ( 23) 00:07:53.504 16131.938 - 16232.763: 96.9101% ( 24) 00:07:53.504 16232.763 - 16333.588: 97.0242% ( 13) 00:07:53.504 16333.588 - 16434.412: 97.1559% ( 15) 00:07:53.504 16434.412 - 16535.237: 97.1910% ( 4) 00:07:53.504 16837.711 - 16938.535: 97.2349% ( 5) 00:07:53.504 16938.535 - 17039.360: 97.2700% ( 4) 00:07:53.504 17039.360 - 17140.185: 97.3051% ( 4) 00:07:53.504 17140.185 - 17241.009: 97.3227% ( 2) 00:07:53.504 17241.009 - 17341.834: 97.3490% ( 3) 00:07:53.504 17341.834 - 17442.658: 97.3929% ( 5) 00:07:53.504 17442.658 - 17543.483: 97.4544% ( 7) 00:07:53.504 17543.483 - 17644.308: 97.4807% ( 3) 00:07:53.504 17644.308 - 17745.132: 97.5948% ( 13) 00:07:53.504 17745.132 - 17845.957: 97.6562% ( 7) 00:07:53.504 17845.957 - 17946.782: 97.7616% ( 12) 00:07:53.504 17946.782 - 18047.606: 97.8406% ( 9) 00:07:53.504 18047.606 - 18148.431: 97.9371% ( 11) 00:07:53.504 18148.431 - 18249.255: 98.0337% ( 11) 00:07:53.504 18249.255 - 18350.080: 98.0776% ( 5) 00:07:53.504 18350.080 - 18450.905: 98.1215% ( 5) 00:07:53.504 18450.905 - 18551.729: 98.1654% ( 5) 00:07:53.504 18551.729 - 18652.554: 98.2093% ( 5) 00:07:53.504 18652.554 - 18753.378: 98.2707% ( 7) 00:07:53.504 18753.378 - 18854.203: 98.3146% ( 5) 00:07:53.504 19156.677 - 19257.502: 98.3322% ( 2) 00:07:53.504 19257.502 - 19358.326: 98.3761% ( 5) 00:07:53.504 19358.326 - 19459.151: 98.4463% ( 8) 00:07:53.504 19459.151 - 19559.975: 98.4989% ( 6) 00:07:53.504 19559.975 - 19660.800: 98.5428% ( 5) 00:07:53.504 19660.800 - 19761.625: 98.5867% ( 5) 00:07:53.504 19761.625 - 19862.449: 98.6482% ( 7) 00:07:53.504 19862.449 - 19963.274: 98.6833% ( 4) 00:07:53.504 19963.274 - 20064.098: 98.7447% ( 7) 00:07:53.504 20064.098 - 20164.923: 98.7623% ( 2) 00:07:53.504 20164.923 - 20265.748: 98.8150% ( 6) 00:07:53.504 20265.748 - 20366.572: 98.8764% ( 7) 00:07:53.504 26012.751 - 26214.400: 98.9291% ( 6) 00:07:53.504 26214.400 - 26416.049: 98.9817% ( 6) 00:07:53.504 26416.049 - 26617.698: 99.0607% ( 9) 00:07:53.504 26617.698 - 26819.348: 99.1310% ( 8) 00:07:53.504 26819.348 - 27020.997: 99.2012% ( 8) 00:07:53.504 27020.997 - 27222.646: 99.2714% ( 8) 00:07:53.504 27222.646 - 27424.295: 99.3504% ( 9) 00:07:53.504 27424.295 - 27625.945: 99.4206% ( 8) 00:07:53.504 27625.945 - 27827.594: 99.4382% ( 2) 00:07:53.504 33675.422 - 33877.071: 99.4558% ( 2) 00:07:53.504 33877.071 - 34078.720: 99.5260% ( 8) 00:07:53.504 34078.720 - 34280.369: 99.5874% ( 7) 00:07:53.504 34280.369 - 34482.018: 99.6577% ( 8) 00:07:53.504 34482.018 - 34683.668: 99.7279% ( 8) 00:07:53.504 34683.668 - 34885.317: 99.8069% ( 9) 00:07:53.504 34885.317 - 35086.966: 99.8771% ( 8) 00:07:53.504 35086.966 - 35288.615: 99.9473% ( 8) 00:07:53.504 35288.615 - 35490.265: 100.0000% ( 6) 00:07:53.504 00:07:53.504 Latency histogram for PCIE 
(0000:00:11.0) NSID 1 from core 0: 00:07:53.504 ============================================================================== 00:07:53.504 Range in us Cumulative IO count 00:07:53.504 8570.092 - 8620.505: 0.0263% ( 3) 00:07:53.504 8620.505 - 8670.917: 0.0527% ( 3) 00:07:53.504 8670.917 - 8721.329: 0.1580% ( 12) 00:07:53.504 8721.329 - 8771.742: 0.2546% ( 11) 00:07:53.504 8771.742 - 8822.154: 0.3423% ( 10) 00:07:53.504 8822.154 - 8872.566: 0.3950% ( 6) 00:07:53.504 8872.566 - 8922.978: 0.4477% ( 6) 00:07:53.504 8922.978 - 8973.391: 0.4828% ( 4) 00:07:53.504 8973.391 - 9023.803: 0.6232% ( 16) 00:07:53.504 9023.803 - 9074.215: 0.8339% ( 24) 00:07:53.504 9074.215 - 9124.628: 1.0973% ( 30) 00:07:53.504 9124.628 - 9175.040: 1.5186% ( 48) 00:07:53.504 9175.040 - 9225.452: 2.1243% ( 69) 00:07:53.504 9225.452 - 9275.865: 2.8178% ( 79) 00:07:53.504 9275.865 - 9326.277: 3.6254% ( 92) 00:07:53.504 9326.277 - 9376.689: 4.5119% ( 101) 00:07:53.504 9376.689 - 9427.102: 5.3283% ( 93) 00:07:53.504 9427.102 - 9477.514: 6.3378% ( 115) 00:07:53.504 9477.514 - 9527.926: 7.4614% ( 128) 00:07:53.504 9527.926 - 9578.338: 8.7605% ( 148) 00:07:53.504 9578.338 - 9628.751: 10.2001% ( 164) 00:07:53.504 9628.751 - 9679.163: 11.6924% ( 170) 00:07:53.504 9679.163 - 9729.575: 13.4568% ( 201) 00:07:53.504 9729.575 - 9779.988: 15.5548% ( 239) 00:07:53.504 9779.988 - 9830.400: 17.7317% ( 248) 00:07:53.504 9830.400 - 9880.812: 19.9877% ( 257) 00:07:53.504 9880.812 - 9931.225: 22.1383% ( 245) 00:07:53.504 9931.225 - 9981.637: 24.3855% ( 256) 00:07:53.504 9981.637 - 10032.049: 26.7293% ( 267) 00:07:53.504 10032.049 - 10082.462: 29.2310% ( 285) 00:07:53.504 10082.462 - 10132.874: 31.5660% ( 266) 00:07:53.504 10132.874 - 10183.286: 34.0590% ( 284) 00:07:53.504 10183.286 - 10233.698: 36.7275% ( 304) 00:07:53.504 10233.698 - 10284.111: 39.4312% ( 308) 00:07:53.504 10284.111 - 10334.523: 42.0295% ( 296) 00:07:53.504 10334.523 - 10384.935: 44.8034% ( 316) 00:07:53.504 10384.935 - 10435.348: 47.2086% ( 274) 00:07:53.504 10435.348 - 10485.760: 49.6928% ( 283) 00:07:53.504 10485.760 - 10536.172: 51.9751% ( 260) 00:07:53.504 10536.172 - 10586.585: 54.1696% ( 250) 00:07:53.504 10586.585 - 10636.997: 56.2149% ( 233) 00:07:53.504 10636.997 - 10687.409: 58.1110% ( 216) 00:07:53.504 10687.409 - 10737.822: 59.9631% ( 211) 00:07:53.504 10737.822 - 10788.234: 61.6836% ( 196) 00:07:53.504 10788.234 - 10838.646: 63.4217% ( 198) 00:07:53.504 10838.646 - 10889.058: 65.0018% ( 180) 00:07:53.504 10889.058 - 10939.471: 66.4501% ( 165) 00:07:53.504 10939.471 - 10989.883: 67.7493% ( 148) 00:07:53.504 10989.883 - 11040.295: 68.8992% ( 131) 00:07:53.504 11040.295 - 11090.708: 70.0843% ( 135) 00:07:53.504 11090.708 - 11141.120: 71.0499% ( 110) 00:07:53.504 11141.120 - 11191.532: 71.8838% ( 95) 00:07:53.504 11191.532 - 11241.945: 72.6826% ( 91) 00:07:53.504 11241.945 - 11292.357: 73.4024% ( 82) 00:07:53.504 11292.357 - 11342.769: 74.0432% ( 73) 00:07:53.504 11342.769 - 11393.182: 74.5787% ( 61) 00:07:53.504 11393.182 - 11443.594: 75.1668% ( 67) 00:07:53.504 11443.594 - 11494.006: 75.6759% ( 58) 00:07:53.504 11494.006 - 11544.418: 76.2114% ( 61) 00:07:53.504 11544.418 - 11594.831: 76.6239% ( 47) 00:07:53.504 11594.831 - 11645.243: 76.9312% ( 35) 00:07:53.504 11645.243 - 11695.655: 77.2735% ( 39) 00:07:53.504 11695.655 - 11746.068: 77.6685% ( 45) 00:07:53.504 11746.068 - 11796.480: 78.0197% ( 40) 00:07:53.505 11796.480 - 11846.892: 78.3357% ( 36) 00:07:53.505 11846.892 - 11897.305: 78.6341% ( 34) 00:07:53.505 11897.305 - 11947.717: 78.8624% ( 26) 00:07:53.505 
11947.717 - 11998.129: 79.1433% ( 32) 00:07:53.505 11998.129 - 12048.542: 79.4505% ( 35) 00:07:53.505 12048.542 - 12098.954: 79.7226% ( 31) 00:07:53.505 12098.954 - 12149.366: 80.0211% ( 34) 00:07:53.505 12149.366 - 12199.778: 80.3459% ( 37) 00:07:53.505 12199.778 - 12250.191: 80.6882% ( 39) 00:07:53.505 12250.191 - 12300.603: 81.0832% ( 45) 00:07:53.505 12300.603 - 12351.015: 81.4343% ( 40) 00:07:53.505 12351.015 - 12401.428: 81.8206% ( 44) 00:07:53.505 12401.428 - 12451.840: 82.2419% ( 48) 00:07:53.505 12451.840 - 12502.252: 82.6282% ( 44) 00:07:53.505 12502.252 - 12552.665: 83.0144% ( 44) 00:07:53.505 12552.665 - 12603.077: 83.4533% ( 50) 00:07:53.505 12603.077 - 12653.489: 83.8746% ( 48) 00:07:53.505 12653.489 - 12703.902: 84.3223% ( 51) 00:07:53.505 12703.902 - 12754.314: 84.7173% ( 45) 00:07:53.505 12754.314 - 12804.726: 85.0860% ( 42) 00:07:53.505 12804.726 - 12855.138: 85.4898% ( 46) 00:07:53.505 12855.138 - 12905.551: 85.8848% ( 45) 00:07:53.505 12905.551 - 13006.375: 86.7100% ( 94) 00:07:53.505 13006.375 - 13107.200: 87.4561% ( 85) 00:07:53.505 13107.200 - 13208.025: 88.1847% ( 83) 00:07:53.505 13208.025 - 13308.849: 88.8255% ( 73) 00:07:53.505 13308.849 - 13409.674: 89.4487% ( 71) 00:07:53.505 13409.674 - 13510.498: 89.9052% ( 52) 00:07:53.505 13510.498 - 13611.323: 90.2651% ( 41) 00:07:53.505 13611.323 - 13712.148: 90.5899% ( 37) 00:07:53.505 13712.148 - 13812.972: 90.9849% ( 45) 00:07:53.505 13812.972 - 13913.797: 91.3624% ( 43) 00:07:53.505 13913.797 - 14014.622: 91.6520% ( 33) 00:07:53.505 14014.622 - 14115.446: 91.8890% ( 27) 00:07:53.505 14115.446 - 14216.271: 92.0558% ( 19) 00:07:53.505 14216.271 - 14317.095: 92.2841% ( 26) 00:07:53.505 14317.095 - 14417.920: 92.4947% ( 24) 00:07:53.505 14417.920 - 14518.745: 92.7054% ( 24) 00:07:53.505 14518.745 - 14619.569: 92.8897% ( 21) 00:07:53.505 14619.569 - 14720.394: 93.0741% ( 21) 00:07:53.505 14720.394 - 14821.218: 93.2058% ( 15) 00:07:53.505 14821.218 - 14922.043: 93.3199% ( 13) 00:07:53.505 14922.043 - 15022.868: 93.4691% ( 17) 00:07:53.505 15022.868 - 15123.692: 93.7324% ( 30) 00:07:53.505 15123.692 - 15224.517: 94.0309% ( 34) 00:07:53.505 15224.517 - 15325.342: 94.4084% ( 43) 00:07:53.505 15325.342 - 15426.166: 94.7858% ( 43) 00:07:53.505 15426.166 - 15526.991: 95.1721% ( 44) 00:07:53.505 15526.991 - 15627.815: 95.5583% ( 44) 00:07:53.505 15627.815 - 15728.640: 95.8567% ( 34) 00:07:53.505 15728.640 - 15829.465: 96.1728% ( 36) 00:07:53.505 15829.465 - 15930.289: 96.4888% ( 36) 00:07:53.505 15930.289 - 16031.114: 96.7433% ( 29) 00:07:53.505 16031.114 - 16131.938: 96.9277% ( 21) 00:07:53.505 16131.938 - 16232.763: 97.0418% ( 13) 00:07:53.505 16232.763 - 16333.588: 97.0769% ( 4) 00:07:53.505 16333.588 - 16434.412: 97.1032% ( 3) 00:07:53.505 16434.412 - 16535.237: 97.1296% ( 3) 00:07:53.505 16535.237 - 16636.062: 97.1735% ( 5) 00:07:53.505 16636.062 - 16736.886: 97.2700% ( 11) 00:07:53.505 16736.886 - 16837.711: 97.3227% ( 6) 00:07:53.505 16837.711 - 16938.535: 97.3841% ( 7) 00:07:53.505 16938.535 - 17039.360: 97.4280% ( 5) 00:07:53.505 17039.360 - 17140.185: 97.4895% ( 7) 00:07:53.505 17140.185 - 17241.009: 97.5509% ( 7) 00:07:53.505 17241.009 - 17341.834: 97.6036% ( 6) 00:07:53.505 17341.834 - 17442.658: 97.6562% ( 6) 00:07:53.505 17442.658 - 17543.483: 97.7177% ( 7) 00:07:53.505 17543.483 - 17644.308: 97.7528% ( 4) 00:07:53.505 18450.905 - 18551.729: 97.7879% ( 4) 00:07:53.505 18551.729 - 18652.554: 97.8494% ( 7) 00:07:53.505 18652.554 - 18753.378: 97.9020% ( 6) 00:07:53.505 18753.378 - 18854.203: 97.9635% ( 7) 00:07:53.505 
18854.203 - 18955.028: 98.0162% ( 6) 00:07:53.505 18955.028 - 19055.852: 98.1127% ( 11) 00:07:53.505 19055.852 - 19156.677: 98.2444% ( 15) 00:07:53.505 19156.677 - 19257.502: 98.3585% ( 13) 00:07:53.505 19257.502 - 19358.326: 98.4814% ( 14) 00:07:53.505 19358.326 - 19459.151: 98.5955% ( 13) 00:07:53.505 19459.151 - 19559.975: 98.6833% ( 10) 00:07:53.505 19559.975 - 19660.800: 98.7447% ( 7) 00:07:53.505 19660.800 - 19761.625: 98.8062% ( 7) 00:07:53.505 19761.625 - 19862.449: 98.8764% ( 8) 00:07:53.505 24802.855 - 24903.680: 98.9115% ( 4) 00:07:53.505 24903.680 - 25004.505: 98.9466% ( 4) 00:07:53.505 25004.505 - 25105.329: 98.9817% ( 4) 00:07:53.505 25105.329 - 25206.154: 99.0169% ( 4) 00:07:53.505 25206.154 - 25306.978: 99.0520% ( 4) 00:07:53.505 25306.978 - 25407.803: 99.0959% ( 5) 00:07:53.505 25407.803 - 25508.628: 99.1310% ( 4) 00:07:53.505 25508.628 - 25609.452: 99.1661% ( 4) 00:07:53.505 25609.452 - 25710.277: 99.2012% ( 4) 00:07:53.505 25710.277 - 25811.102: 99.2363% ( 4) 00:07:53.505 25811.102 - 26012.751: 99.3153% ( 9) 00:07:53.505 26012.751 - 26214.400: 99.3855% ( 8) 00:07:53.505 26214.400 - 26416.049: 99.4382% ( 6) 00:07:53.505 32465.526 - 32667.175: 99.4733% ( 4) 00:07:53.505 32667.175 - 32868.825: 99.5435% ( 8) 00:07:53.505 32868.825 - 33070.474: 99.6138% ( 8) 00:07:53.505 33070.474 - 33272.123: 99.6928% ( 9) 00:07:53.505 33272.123 - 33473.772: 99.7630% ( 8) 00:07:53.505 33473.772 - 33675.422: 99.8332% ( 8) 00:07:53.505 33675.422 - 33877.071: 99.9122% ( 9) 00:07:53.505 33877.071 - 34078.720: 99.9912% ( 9) 00:07:53.505 34078.720 - 34280.369: 100.0000% ( 1) 00:07:53.505 00:07:53.505 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:53.505 ============================================================================== 00:07:53.505 Range in us Cumulative IO count 00:07:53.505 8620.505 - 8670.917: 0.0263% ( 3) 00:07:53.505 8670.917 - 8721.329: 0.0614% ( 4) 00:07:53.505 8721.329 - 8771.742: 0.0878% ( 3) 00:07:53.505 8771.742 - 8822.154: 0.1141% ( 3) 00:07:53.505 8822.154 - 8872.566: 0.1404% ( 3) 00:07:53.505 8872.566 - 8922.978: 0.1931% ( 6) 00:07:53.505 8922.978 - 8973.391: 0.2633% ( 8) 00:07:53.505 8973.391 - 9023.803: 0.3599% ( 11) 00:07:53.505 9023.803 - 9074.215: 0.5794% ( 25) 00:07:53.505 9074.215 - 9124.628: 0.9041% ( 37) 00:07:53.505 9124.628 - 9175.040: 1.1587% ( 29) 00:07:53.505 9175.040 - 9225.452: 1.6327% ( 54) 00:07:53.505 9225.452 - 9275.865: 2.2121% ( 66) 00:07:53.505 9275.865 - 9326.277: 3.0460% ( 95) 00:07:53.505 9326.277 - 9376.689: 3.8185% ( 88) 00:07:53.505 9376.689 - 9427.102: 4.9333% ( 127) 00:07:53.505 9427.102 - 9477.514: 6.1271% ( 136) 00:07:53.505 9477.514 - 9527.926: 7.5579% ( 163) 00:07:53.505 9527.926 - 9578.338: 9.0239% ( 167) 00:07:53.505 9578.338 - 9628.751: 10.5600% ( 175) 00:07:53.505 9628.751 - 9679.163: 12.2454% ( 192) 00:07:53.505 9679.163 - 9729.575: 13.9747% ( 197) 00:07:53.505 9729.575 - 9779.988: 15.8620% ( 215) 00:07:53.505 9779.988 - 9830.400: 17.8459% ( 226) 00:07:53.505 9830.400 - 9880.812: 20.0843% ( 255) 00:07:53.505 9880.812 - 9931.225: 22.6036% ( 287) 00:07:53.505 9931.225 - 9981.637: 25.0176% ( 275) 00:07:53.505 9981.637 - 10032.049: 27.6159% ( 296) 00:07:53.505 10032.049 - 10082.462: 30.2493% ( 300) 00:07:53.505 10082.462 - 10132.874: 32.8827% ( 300) 00:07:53.505 10132.874 - 10183.286: 35.4898% ( 297) 00:07:53.505 10183.286 - 10233.698: 38.2022% ( 309) 00:07:53.505 10233.698 - 10284.111: 40.7128% ( 286) 00:07:53.505 10284.111 - 10334.523: 43.1443% ( 277) 00:07:53.505 10334.523 - 10384.935: 45.6022% ( 280) 
00:07:53.505 10384.935 - 10435.348: 48.0249% ( 276) 00:07:53.505 10435.348 - 10485.760: 50.2809% ( 257) 00:07:53.505 10485.760 - 10536.172: 52.5105% ( 254) 00:07:53.505 10536.172 - 10586.585: 54.6875% ( 248) 00:07:53.505 10586.585 - 10636.997: 56.7416% ( 234) 00:07:53.505 10636.997 - 10687.409: 58.6289% ( 215) 00:07:53.505 10687.409 - 10737.822: 60.4284% ( 205) 00:07:53.505 10737.822 - 10788.234: 62.1664% ( 198) 00:07:53.505 10788.234 - 10838.646: 63.7202% ( 177) 00:07:53.505 10838.646 - 10889.058: 65.2651% ( 176) 00:07:53.505 10889.058 - 10939.471: 66.6871% ( 162) 00:07:53.505 10939.471 - 10989.883: 68.0653% ( 157) 00:07:53.505 10989.883 - 11040.295: 69.3118% ( 142) 00:07:53.505 11040.295 - 11090.708: 70.4968% ( 135) 00:07:53.505 11090.708 - 11141.120: 71.5590% ( 121) 00:07:53.505 11141.120 - 11191.532: 72.4456% ( 101) 00:07:53.505 11191.532 - 11241.945: 73.2883% ( 96) 00:07:53.505 11241.945 - 11292.357: 74.0169% ( 83) 00:07:53.505 11292.357 - 11342.769: 74.6225% ( 69) 00:07:53.505 11342.769 - 11393.182: 75.1668% ( 62) 00:07:53.505 11393.182 - 11443.594: 75.6847% ( 59) 00:07:53.505 11443.594 - 11494.006: 76.1587% ( 54) 00:07:53.505 11494.006 - 11544.418: 76.5098% ( 40) 00:07:53.505 11544.418 - 11594.831: 76.8697% ( 41) 00:07:53.505 11594.831 - 11645.243: 77.1770% ( 35) 00:07:53.505 11645.243 - 11695.655: 77.4754% ( 34) 00:07:53.505 11695.655 - 11746.068: 77.7563% ( 32) 00:07:53.505 11746.068 - 11796.480: 78.0109% ( 29) 00:07:53.505 11796.480 - 11846.892: 78.2654% ( 29) 00:07:53.505 11846.892 - 11897.305: 78.5112% ( 28) 00:07:53.505 11897.305 - 11947.717: 78.7307% ( 25) 00:07:53.505 11947.717 - 11998.129: 79.0204% ( 33) 00:07:53.505 11998.129 - 12048.542: 79.3276% ( 35) 00:07:53.505 12048.542 - 12098.954: 79.5734% ( 28) 00:07:53.505 12098.954 - 12149.366: 79.8016% ( 26) 00:07:53.506 12149.366 - 12199.778: 80.0298% ( 26) 00:07:53.506 12199.778 - 12250.191: 80.2581% ( 26) 00:07:53.506 12250.191 - 12300.603: 80.5302% ( 31) 00:07:53.506 12300.603 - 12351.015: 80.9077% ( 43) 00:07:53.506 12351.015 - 12401.428: 81.2676% ( 41) 00:07:53.506 12401.428 - 12451.840: 81.6889% ( 48) 00:07:53.506 12451.840 - 12502.252: 82.0664% ( 43) 00:07:53.506 12502.252 - 12552.665: 82.4877% ( 48) 00:07:53.506 12552.665 - 12603.077: 82.9091% ( 48) 00:07:53.506 12603.077 - 12653.489: 83.3216% ( 47) 00:07:53.506 12653.489 - 12703.902: 83.7518% ( 49) 00:07:53.506 12703.902 - 12754.314: 84.1907% ( 50) 00:07:53.506 12754.314 - 12804.726: 84.6120% ( 48) 00:07:53.506 12804.726 - 12855.138: 85.0246% ( 47) 00:07:53.506 12855.138 - 12905.551: 85.4986% ( 54) 00:07:53.506 12905.551 - 13006.375: 86.3325% ( 95) 00:07:53.506 13006.375 - 13107.200: 87.2191% ( 101) 00:07:53.506 13107.200 - 13208.025: 88.0355% ( 93) 00:07:53.506 13208.025 - 13308.849: 88.7202% ( 78) 00:07:53.506 13308.849 - 13409.674: 89.2205% ( 57) 00:07:53.506 13409.674 - 13510.498: 89.7296% ( 58) 00:07:53.506 13510.498 - 13611.323: 90.1949% ( 53) 00:07:53.506 13611.323 - 13712.148: 90.6777% ( 55) 00:07:53.506 13712.148 - 13812.972: 91.0463% ( 42) 00:07:53.506 13812.972 - 13913.797: 91.3185% ( 31) 00:07:53.506 13913.797 - 14014.622: 91.5906% ( 31) 00:07:53.506 14014.622 - 14115.446: 91.9154% ( 37) 00:07:53.506 14115.446 - 14216.271: 92.2138% ( 34) 00:07:53.506 14216.271 - 14317.095: 92.4947% ( 32) 00:07:53.506 14317.095 - 14417.920: 92.7669% ( 31) 00:07:53.506 14417.920 - 14518.745: 93.0565% ( 33) 00:07:53.506 14518.745 - 14619.569: 93.3550% ( 34) 00:07:53.506 14619.569 - 14720.394: 93.6622% ( 35) 00:07:53.506 14720.394 - 14821.218: 93.9782% ( 36) 00:07:53.506 
14821.218 - 14922.043: 94.2591% ( 32) 00:07:53.506 14922.043 - 15022.868: 94.5137% ( 29) 00:07:53.506 15022.868 - 15123.692: 94.8473% ( 38) 00:07:53.506 15123.692 - 15224.517: 95.2159% ( 42) 00:07:53.506 15224.517 - 15325.342: 95.4442% ( 26) 00:07:53.506 15325.342 - 15426.166: 95.6987% ( 29) 00:07:53.506 15426.166 - 15526.991: 95.9533% ( 29) 00:07:53.506 15526.991 - 15627.815: 96.1903% ( 27) 00:07:53.506 15627.815 - 15728.640: 96.4537% ( 30) 00:07:53.506 15728.640 - 15829.465: 96.6731% ( 25) 00:07:53.506 15829.465 - 15930.289: 96.8926% ( 25) 00:07:53.506 15930.289 - 16031.114: 96.9979% ( 12) 00:07:53.506 16031.114 - 16131.938: 97.0330% ( 4) 00:07:53.506 16131.938 - 16232.763: 97.0681% ( 4) 00:07:53.506 16232.763 - 16333.588: 97.1032% ( 4) 00:07:53.506 16333.588 - 16434.412: 97.1559% ( 6) 00:07:53.506 16434.412 - 16535.237: 97.2525% ( 11) 00:07:53.506 16535.237 - 16636.062: 97.3227% ( 8) 00:07:53.506 16636.062 - 16736.886: 97.3841% ( 7) 00:07:53.506 16736.886 - 16837.711: 97.4368% ( 6) 00:07:53.506 16837.711 - 16938.535: 97.4982% ( 7) 00:07:53.506 16938.535 - 17039.360: 97.5509% ( 6) 00:07:53.506 17039.360 - 17140.185: 97.6124% ( 7) 00:07:53.506 17140.185 - 17241.009: 97.6650% ( 6) 00:07:53.506 17241.009 - 17341.834: 97.7177% ( 6) 00:07:53.506 17341.834 - 17442.658: 97.7528% ( 4) 00:07:53.506 19358.326 - 19459.151: 97.7967% ( 5) 00:07:53.506 19459.151 - 19559.975: 97.9196% ( 14) 00:07:53.506 19559.975 - 19660.800: 98.0337% ( 13) 00:07:53.506 19660.800 - 19761.625: 98.1742% ( 16) 00:07:53.506 19761.625 - 19862.449: 98.3146% ( 16) 00:07:53.506 19862.449 - 19963.274: 98.4287% ( 13) 00:07:53.506 19963.274 - 20064.098: 98.5253% ( 11) 00:07:53.506 20064.098 - 20164.923: 98.6482% ( 14) 00:07:53.506 20164.923 - 20265.748: 98.7535% ( 12) 00:07:53.506 20265.748 - 20366.572: 98.8588% ( 12) 00:07:53.506 20366.572 - 20467.397: 98.8764% ( 2) 00:07:53.506 24298.732 - 24399.557: 98.9027% ( 3) 00:07:53.506 24399.557 - 24500.382: 98.9466% ( 5) 00:07:53.506 24500.382 - 24601.206: 98.9817% ( 4) 00:07:53.506 24601.206 - 24702.031: 99.0169% ( 4) 00:07:53.506 24702.031 - 24802.855: 99.0520% ( 4) 00:07:53.506 24802.855 - 24903.680: 99.0871% ( 4) 00:07:53.506 24903.680 - 25004.505: 99.1310% ( 5) 00:07:53.506 25004.505 - 25105.329: 99.1661% ( 4) 00:07:53.506 25105.329 - 25206.154: 99.2012% ( 4) 00:07:53.506 25206.154 - 25306.978: 99.2363% ( 4) 00:07:53.506 25306.978 - 25407.803: 99.2714% ( 4) 00:07:53.506 25407.803 - 25508.628: 99.3065% ( 4) 00:07:53.506 25508.628 - 25609.452: 99.3504% ( 5) 00:07:53.506 25609.452 - 25710.277: 99.3855% ( 4) 00:07:53.506 25710.277 - 25811.102: 99.4206% ( 4) 00:07:53.506 25811.102 - 26012.751: 99.4382% ( 2) 00:07:53.506 32062.228 - 32263.877: 99.4733% ( 4) 00:07:53.506 32263.877 - 32465.526: 99.5435% ( 8) 00:07:53.506 32465.526 - 32667.175: 99.6050% ( 7) 00:07:53.506 32667.175 - 32868.825: 99.6752% ( 8) 00:07:53.506 32868.825 - 33070.474: 99.7542% ( 9) 00:07:53.506 33070.474 - 33272.123: 99.8157% ( 7) 00:07:53.506 33272.123 - 33473.772: 99.8947% ( 9) 00:07:53.506 33473.772 - 33675.422: 99.9649% ( 8) 00:07:53.506 33675.422 - 33877.071: 100.0000% ( 4) 00:07:53.506 00:07:53.506 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:53.506 ============================================================================== 00:07:53.506 Range in us Cumulative IO count 00:07:53.506 8822.154 - 8872.566: 0.0263% ( 3) 00:07:53.506 8872.566 - 8922.978: 0.0614% ( 4) 00:07:53.506 8922.978 - 8973.391: 0.0966% ( 4) 00:07:53.506 8973.391 - 9023.803: 0.3511% ( 29) 00:07:53.506 9023.803 - 
9074.215: 0.5179% ( 19) 00:07:53.506 9074.215 - 9124.628: 0.7022% ( 21) 00:07:53.506 9124.628 - 9175.040: 0.9831% ( 32) 00:07:53.506 9175.040 - 9225.452: 1.5011% ( 59) 00:07:53.506 9225.452 - 9275.865: 2.2209% ( 82) 00:07:53.506 9275.865 - 9326.277: 2.9319% ( 81) 00:07:53.506 9326.277 - 9376.689: 3.7570% ( 94) 00:07:53.506 9376.689 - 9427.102: 4.8016% ( 119) 00:07:53.506 9427.102 - 9477.514: 5.9603% ( 132) 00:07:53.506 9477.514 - 9527.926: 7.1629% ( 137) 00:07:53.506 9527.926 - 9578.338: 8.6201% ( 166) 00:07:53.506 9578.338 - 9628.751: 10.0685% ( 165) 00:07:53.506 9628.751 - 9679.163: 11.6924% ( 185) 00:07:53.506 9679.163 - 9729.575: 13.4831% ( 204) 00:07:53.506 9729.575 - 9779.988: 15.3441% ( 212) 00:07:53.506 9779.988 - 9830.400: 17.5913% ( 256) 00:07:53.506 9830.400 - 9880.812: 19.6278% ( 232) 00:07:53.506 9880.812 - 9931.225: 21.8311% ( 251) 00:07:53.506 9931.225 - 9981.637: 24.1134% ( 260) 00:07:53.506 9981.637 - 10032.049: 26.5713% ( 280) 00:07:53.506 10032.049 - 10082.462: 29.1871% ( 298) 00:07:53.506 10082.462 - 10132.874: 31.8206% ( 300) 00:07:53.506 10132.874 - 10183.286: 34.3574% ( 289) 00:07:53.506 10183.286 - 10233.698: 37.0260% ( 304) 00:07:53.506 10233.698 - 10284.111: 39.8174% ( 318) 00:07:53.506 10284.111 - 10334.523: 42.4596% ( 301) 00:07:53.506 10334.523 - 10384.935: 45.2511% ( 318) 00:07:53.506 10384.935 - 10435.348: 47.9196% ( 304) 00:07:53.506 10435.348 - 10485.760: 50.4477% ( 288) 00:07:53.506 10485.760 - 10536.172: 52.7300% ( 260) 00:07:53.506 10536.172 - 10586.585: 54.8631% ( 243) 00:07:53.506 10586.585 - 10636.997: 56.9698% ( 240) 00:07:53.506 10636.997 - 10687.409: 58.9800% ( 229) 00:07:53.506 10687.409 - 10737.822: 60.9902% ( 229) 00:07:53.506 10737.822 - 10788.234: 62.9213% ( 220) 00:07:53.506 10788.234 - 10838.646: 64.7384% ( 207) 00:07:53.506 10838.646 - 10889.058: 66.2921% ( 177) 00:07:53.506 10889.058 - 10939.471: 67.6440% ( 154) 00:07:53.506 10939.471 - 10989.883: 68.9343% ( 147) 00:07:53.506 10989.883 - 11040.295: 70.0930% ( 132) 00:07:53.506 11040.295 - 11090.708: 71.0762% ( 112) 00:07:53.506 11090.708 - 11141.120: 72.0769% ( 114) 00:07:53.506 11141.120 - 11191.532: 72.9547% ( 100) 00:07:53.506 11191.532 - 11241.945: 73.6482% ( 79) 00:07:53.506 11241.945 - 11292.357: 74.3153% ( 76) 00:07:53.506 11292.357 - 11342.769: 74.8771% ( 64) 00:07:53.506 11342.769 - 11393.182: 75.4652% ( 67) 00:07:53.506 11393.182 - 11443.594: 76.0709% ( 69) 00:07:53.506 11443.594 - 11494.006: 76.5362% ( 53) 00:07:53.506 11494.006 - 11544.418: 76.9575% ( 48) 00:07:53.506 11544.418 - 11594.831: 77.2999% ( 39) 00:07:53.506 11594.831 - 11645.243: 77.6071% ( 35) 00:07:53.506 11645.243 - 11695.655: 77.8441% ( 27) 00:07:53.506 11695.655 - 11746.068: 78.0284% ( 21) 00:07:53.506 11746.068 - 11796.480: 78.2303% ( 23) 00:07:53.506 11796.480 - 11846.892: 78.5025% ( 31) 00:07:53.506 11846.892 - 11897.305: 78.7307% ( 26) 00:07:53.506 11897.305 - 11947.717: 79.0467% ( 36) 00:07:53.506 11947.717 - 11998.129: 79.2837% ( 27) 00:07:53.506 11998.129 - 12048.542: 79.5909% ( 35) 00:07:53.506 12048.542 - 12098.954: 79.8367% ( 28) 00:07:53.506 12098.954 - 12149.366: 80.1966% ( 41) 00:07:53.506 12149.366 - 12199.778: 80.5126% ( 36) 00:07:53.506 12199.778 - 12250.191: 80.8550% ( 39) 00:07:53.506 12250.191 - 12300.603: 81.1447% ( 33) 00:07:53.506 12300.603 - 12351.015: 81.4958% ( 40) 00:07:53.506 12351.015 - 12401.428: 81.8908% ( 45) 00:07:53.506 12401.428 - 12451.840: 82.2858% ( 45) 00:07:53.506 12451.840 - 12502.252: 82.7598% ( 54) 00:07:53.506 12502.252 - 12552.665: 83.2338% ( 54) 00:07:53.506 12552.665 
[... histogram buckets elided (tail of the preceding histogram): 12603.077 us - 32667.175 us, 83.7430% -> 100.0000% cumulative ...]
00:07:53.507
00:07:53.507 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:53.507 ==============================================================================
00:07:53.507        Range in us     Cumulative    IO count
[... histogram buckets elided: 8872.566 us - 31255.631 us, 0.0527% -> 100.0000% cumulative ...]
00:07:53.508
00:07:53.508 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:53.508 ==============================================================================
00:07:53.508        Range in us     Cumulative    IO count
[... histogram buckets elided: 8872.566 us - 23189.662 us, 0.0873% -> 100.0000% cumulative ...]
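The bucket boundaries in the histograms above are not uniform: steps of ~50.412 us run up to 12905.551 us, then ~100.825 us up to 25811.102 us, then ~201.649 us, so the bucket width doubles each time the latency doubles. That is a log-linear layout (linear buckets inside each power-of-two range). A minimal Python sketch that reproduces boundaries like these; the base width of 50.4123 us and the 128 buckets per doubling range are inferred from this output, not taken from SPDK source:

    def log_linear_edges(unit_us=50.4123, buckets_per_range=128, extra_ranges=3):
        """Bucket edges in us: 2*buckets_per_range linear buckets at the base
        width, then buckets_per_range buckets per range, doubling the width
        for each further range (log-linear)."""
        edges, edge, step = [], 0.0, unit_us
        for _ in range(2 * buckets_per_range):   # linear region up to ~12905 us
            edge += step
            edges.append(round(edge, 3))
        for _ in range(extra_ranges):            # width doubles per range
            step *= 2
            for _ in range(buckets_per_range):
                edge += step
                edges.append(round(edge, 3))
        return edges

    # log_linear_edges()[249:251] -> [12603.075, 12653.487], matching the
    # "12603.077 - 12653.489" bucket above up to rounding of the base width.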
00:07:53.509
00:07:53.509 17:38:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:54.891 Initializing NVMe Controllers
00:07:54.891 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:54.891 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:54.891 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:54.891 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:54.891 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:54.891 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:54.891 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:54.891 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:54.891 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:54.891 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:54.891 Initialization complete. Launching workers.
00:07:54.891 ========================================================
00:07:54.891                                                                             Latency(us)
00:07:54.891 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:54.891 PCIE (0000:00:10.0) NSID 1 from core 0:    9261.70     108.54   13861.11    9015.98   36939.47
00:07:54.891 PCIE (0000:00:11.0) NSID 1 from core 0:    9261.70     108.54   13846.51    9072.79   35219.96
00:07:54.891 PCIE (0000:00:13.0) NSID 1 from core 0:    9261.70     108.54   13831.59    9055.57   34422.53
00:07:54.891 PCIE (0000:00:12.0) NSID 1 from core 0:    9261.70     108.54   13817.06    9317.26   32791.52
00:07:54.891 PCIE (0000:00:12.0) NSID 2 from core 0:    9261.70     108.54   13802.41    8918.02   31299.74
00:07:54.891 PCIE (0000:00:12.0) NSID 3 from core 0:    9325.14     109.28   13694.20    9179.57   24381.96
00:07:54.891 ========================================================
00:07:54.891 Total                                  :   55633.64     651.96   13808.68    8918.02   36939.47
00:07:54.891
00:07:54.891 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:54.891 =================================================================================
00:07:54.891   1.00000% :  9578.338us
00:07:54.891  10.00000% : 10384.935us
00:07:54.891  25.00000% : 11494.006us
00:07:54.891  50.00000% : 13510.498us
00:07:54.891  75.00000% : 15526.991us
00:07:54.891  90.00000% : 17341.834us
00:07:54.891  95.00000% : 19257.502us
00:07:54.891  98.00000% : 21072.345us
00:07:54.891  99.00000% : 28230.892us
00:07:54.891  99.50000% : 35893.563us
00:07:54.891  99.90000% : 36700.160us
00:07:54.891  99.99000% : 37103.458us
00:07:54.891  99.99900% : 37103.458us
00:07:54.891  99.99990% : 37103.458us
00:07:54.891  99.99999% : 37103.458us
00:07:54.891
00:07:54.891 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:54.891 =================================================================================
00:07:54.891   1.00000% :  9779.988us
00:07:54.891  10.00000% : 10384.935us
00:07:54.891  25.00000% : 11443.594us
00:07:54.891  50.00000% : 13510.498us
00:07:54.891  75.00000% : 15325.342us
00:07:54.891  90.00000% : 17341.834us
00:07:54.891  95.00000% : 19156.677us
00:07:54.891  98.00000% : 21475.643us
00:07:54.891  99.00000% : 27222.646us
00:07:54.891  99.50000% : 34280.369us
00:07:54.891  99.90000% : 35086.966us
00:07:54.891  99.99000% : 35288.615us
00:07:54.891  99.99900% : 35288.615us
00:07:54.891  99.99990% : 35288.615us
00:07:54.891  99.99999% : 35288.615us
00:07:54.891
00:07:54.891 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:54.891 =================================================================================
00:07:54.891   1.00000% :  9628.751us
00:07:54.891  10.00000% : 10435.348us
00:07:54.891  25.00000% : 11292.357us
00:07:54.891  50.00000% : 13510.498us
00:07:54.891  75.00000% : 15325.342us
00:07:54.891  90.00000% : 17543.483us
00:07:54.891  95.00000% : 18753.378us
00:07:54.891  98.00000% : 21475.643us
00:07:54.891  99.00000% : 27222.646us
00:07:54.891  99.50000% : 33272.123us
00:07:54.891  99.90000% : 34280.369us
00:07:54.891  99.99000% : 34482.018us
00:07:54.891  99.99900% : 34482.018us
00:07:54.891  99.99990% : 34482.018us
00:07:54.891  99.99999% : 34482.018us
00:07:54.891
00:07:54.891 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:54.891 =================================================================================
00:07:54.891   1.00000% :  9729.575us
00:07:54.891  10.00000% : 10536.172us
00:07:54.891  25.00000% : 11443.594us
00:07:54.891  50.00000% : 13510.498us
00:07:54.891  75.00000% : 15426.166us
00:07:54.891  90.00000% : 17543.483us
00:07:54.891  95.00000% : 19055.852us
00:07:54.891  98.00000% : 21374.818us
00:07:54.891  99.00000% : 25710.277us
00:07:54.891  99.50000% : 31860.578us
00:07:54.891  99.90000% : 32667.175us
00:07:54.891  99.99000% : 32868.825us
00:07:54.891  99.99900% : 32868.825us
00:07:54.891  99.99990% : 32868.825us
00:07:54.891  99.99999% : 32868.825us
00:07:54.891
00:07:54.891 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:54.891 =================================================================================
00:07:54.891   1.00000% :  9578.338us
00:07:54.891  10.00000% : 10485.760us
00:07:54.891  25.00000% : 11443.594us
00:07:54.891  50.00000% : 13510.498us
00:07:54.891  75.00000% : 15325.342us
00:07:54.891  90.00000% : 17442.658us
00:07:54.891  95.00000% : 19257.502us
00:07:54.891  98.00000% : 22080.591us
00:07:54.891  99.00000% : 24298.732us
00:07:54.891  99.50000% : 30247.385us
00:07:54.891  99.90000% : 31255.631us
00:07:54.891  99.99000% : 31457.280us
00:07:54.891  99.99900% : 31457.280us
00:07:54.891  99.99990% : 31457.280us
00:07:54.891  99.99999% : 31457.280us
00:07:54.891
00:07:54.892 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:54.892 =================================================================================
00:07:54.892   1.00000% :  9679.163us
00:07:54.892  10.00000% : 10384.935us
00:07:54.892  25.00000% : 11443.594us
00:07:54.892  50.00000% : 13510.498us
00:07:54.892  75.00000% : 15426.166us
00:07:54.892  90.00000% : 17140.185us
00:07:54.892  95.00000% : 18955.028us
00:07:54.892  98.00000% : 20467.397us
00:07:54.892  99.00000% : 21979.766us
00:07:54.892  99.50000% : 23290.486us
00:07:54.892  99.90000% : 24197.908us
00:07:54.892  99.99000% : 24399.557us
00:07:54.892  99.99900% : 24399.557us
00:07:54.892  99.99990% : 24399.557us
00:07:54.892  99.99999% : 24399.557us
00:07:54.892
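The IOPS and MiB/s columns in the device table above are mutually consistent with the 12288-byte I/O size requested via -o 12288: MiB/s = IOPS * 12288 / 2^20. A quick self-check in Python, with the figures copied from the table (illustrative only):

    IO_SIZE = 12288  # bytes, from the -o 12288 flag on the spdk_nvme_perf command above

    def mib_per_s(iops, io_size=IO_SIZE):
        # Throughput implied by an IOPS figure at a fixed I/O size.
        return iops * io_size / (1 << 20)

    assert abs(mib_per_s(9261.70) - 108.54) < 0.01    # per-namespace rows
    assert abs(mib_per_s(55633.64) - 651.96) < 0.01   # Total row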
00:07:54.892 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:54.892 ==============================================================================
00:07:54.892        Range in us     Cumulative    IO count
[... histogram buckets elided: 8973.391 us - 37103.458 us, 0.0107% -> 100.0000% cumulative ...]
00:07:54.893
00:07:54.893 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:54.893 ==============================================================================
00:07:54.893        Range in us     Cumulative    IO count
[... histogram buckets elided: 9023.803 us - 35288.615 us, 0.0107% -> 100.0000% cumulative ...]
00:07:54.894
00:07:54.894 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:54.894 ==============================================================================
00:07:54.894        Range in us     Cumulative    IO count
[... histogram buckets elided: 9023.803 us - 34482.018 us, 0.0107% -> 100.0000% cumulative ...]
00:07:54.895
00:07:54.895 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:54.895 ==============================================================================
00:07:54.895        Range in us     Cumulative    IO count
[... histogram buckets elided: 9275.865 us - 32868.825 us, 0.0214% -> 100.0000% cumulative ...]
00:07:54.896
00:07:54.896 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
============================================================================== 00:07:54.896 Range in us Cumulative IO count 00:07:54.896 8872.566 - 8922.978: 0.0107% ( 1) 00:07:54.896 9074.215 - 9124.628: 0.0428% ( 3) 00:07:54.896 9124.628 - 9175.040: 0.0856% ( 4) 00:07:54.896 9175.040 - 9225.452: 0.1070% ( 2) 00:07:54.896 9225.452 - 9275.865: 0.2140% ( 10) 00:07:54.896 9275.865 - 9326.277: 0.4388% ( 21) 00:07:54.896 9326.277 - 9376.689: 0.5565% ( 11) 00:07:54.896 9376.689 - 9427.102: 0.6207% ( 6) 00:07:54.896 9427.102 - 9477.514: 0.6742% ( 5) 00:07:54.896 9477.514 - 9527.926: 0.8883% ( 20) 00:07:54.896 9527.926 - 9578.338: 1.0060% ( 11) 00:07:54.896 9578.338 - 9628.751: 1.1130% ( 10) 00:07:54.896 9628.751 - 9679.163: 1.2200% ( 10) 00:07:54.896 9679.163 - 9729.575: 1.4876% ( 25) 00:07:54.896 9729.575 - 9779.988: 1.7658% ( 26) 00:07:54.896 9779.988 - 9830.400: 2.1618% ( 37) 00:07:54.896 9830.400 - 9880.812: 2.6648% ( 47) 00:07:54.896 9880.812 - 9931.225: 3.2213% ( 52) 00:07:54.896 9931.225 - 9981.637: 3.8741% ( 61) 00:07:54.896 9981.637 - 10032.049: 4.4628% ( 55) 00:07:54.896 10032.049 - 10082.462: 4.9658% ( 47) 00:07:54.896 10082.462 - 10132.874: 5.5437% ( 54) 00:07:54.896 10132.874 - 10183.286: 6.4426% ( 84) 00:07:54.896 10183.286 - 10233.698: 7.1276% ( 64) 00:07:54.896 10233.698 - 10284.111: 7.8125% ( 64) 00:07:54.896 10284.111 - 10334.523: 8.5188% ( 66) 00:07:54.896 10334.523 - 10384.935: 9.2359% ( 67) 00:07:54.896 10384.935 - 10435.348: 9.9529% ( 67) 00:07:54.896 10435.348 - 10485.760: 10.7556% ( 75) 00:07:54.896 10485.760 - 10536.172: 11.4512% ( 65) 00:07:54.896 10536.172 - 10586.585: 12.2539% ( 75) 00:07:54.896 10586.585 - 10636.997: 12.8425% ( 55) 00:07:54.896 10636.997 - 10687.409: 13.3776% ( 50) 00:07:54.896 10687.409 - 10737.822: 14.1267% ( 70) 00:07:54.896 10737.822 - 10788.234: 14.8116% ( 64) 00:07:54.896 10788.234 - 10838.646: 15.4645% ( 61) 00:07:54.896 10838.646 - 10889.058: 16.2457% ( 73) 00:07:54.896 10889.058 - 10939.471: 17.0805% ( 78) 00:07:54.896 10939.471 - 10989.883: 17.8831% ( 75) 00:07:54.896 10989.883 - 11040.295: 18.7072% ( 77) 00:07:54.896 11040.295 - 11090.708: 19.6918% ( 92) 00:07:54.896 11090.708 - 11141.120: 20.4730% ( 73) 00:07:54.896 11141.120 - 11191.532: 21.2757% ( 75) 00:07:54.896 11191.532 - 11241.945: 22.1104% ( 78) 00:07:54.896 11241.945 - 11292.357: 23.0736% ( 90) 00:07:54.896 11292.357 - 11342.769: 23.7907% ( 67) 00:07:54.896 11342.769 - 11393.182: 24.4114% ( 58) 00:07:54.896 11393.182 - 11443.594: 25.0214% ( 57) 00:07:54.896 11443.594 - 11494.006: 25.8027% ( 73) 00:07:54.896 11494.006 - 11544.418: 26.6374% ( 78) 00:07:54.896 11544.418 - 11594.831: 27.3652% ( 68) 00:07:54.896 11594.831 - 11645.243: 28.1678% ( 75) 00:07:54.896 11645.243 - 11695.655: 28.7992% ( 59) 00:07:54.896 11695.655 - 11746.068: 29.4199% ( 58) 00:07:54.896 11746.068 - 11796.480: 30.1156% ( 65) 00:07:54.896 11796.480 - 11846.892: 30.8326% ( 67) 00:07:54.897 11846.892 - 11897.305: 31.4747% ( 60) 00:07:54.897 11897.305 - 11947.717: 31.8386% ( 34) 00:07:54.897 11947.717 - 11998.129: 32.3202% ( 45) 00:07:54.897 11998.129 - 12048.542: 32.7376% ( 39) 00:07:54.897 12048.542 - 12098.954: 33.2192% ( 45) 00:07:54.897 12098.954 - 12149.366: 33.5938% ( 35) 00:07:54.897 12149.366 - 12199.778: 33.9683% ( 35) 00:07:54.897 12199.778 - 12250.191: 34.5141% ( 51) 00:07:54.897 12250.191 - 12300.603: 34.9636% ( 42) 00:07:54.897 12300.603 - 12351.015: 35.3917% ( 40) 00:07:54.897 12351.015 - 12401.428: 35.9482% ( 52) 00:07:54.897 12401.428 - 12451.840: 36.6759% ( 68) 00:07:54.897 12451.840 - 12502.252: 
37.3288% ( 61) 00:07:54.897 12502.252 - 12552.665: 38.0993% ( 72) 00:07:54.897 12552.665 - 12603.077: 38.9662% ( 81) 00:07:54.897 12603.077 - 12653.489: 39.6190% ( 61) 00:07:54.897 12653.489 - 12703.902: 40.2076% ( 55) 00:07:54.897 12703.902 - 12754.314: 40.9568% ( 70) 00:07:54.897 12754.314 - 12804.726: 41.5026% ( 51) 00:07:54.897 12804.726 - 12855.138: 42.1019% ( 56) 00:07:54.897 12855.138 - 12905.551: 42.6477% ( 51) 00:07:54.897 12905.551 - 13006.375: 43.7714% ( 105) 00:07:54.897 13006.375 - 13107.200: 45.1413% ( 128) 00:07:54.897 13107.200 - 13208.025: 46.4790% ( 125) 00:07:54.897 13208.025 - 13308.849: 47.9880% ( 141) 00:07:54.897 13308.849 - 13409.674: 49.5505% ( 146) 00:07:54.897 13409.674 - 13510.498: 51.0488% ( 140) 00:07:54.897 13510.498 - 13611.323: 52.3330% ( 120) 00:07:54.897 13611.323 - 13712.148: 53.6387% ( 122) 00:07:54.897 13712.148 - 13812.972: 54.8373% ( 112) 00:07:54.897 13812.972 - 13913.797: 55.8861% ( 98) 00:07:54.897 13913.797 - 14014.622: 57.2025% ( 123) 00:07:54.897 14014.622 - 14115.446: 58.6580% ( 136) 00:07:54.897 14115.446 - 14216.271: 59.8673% ( 113) 00:07:54.897 14216.271 - 14317.095: 61.4726% ( 150) 00:07:54.897 14317.095 - 14417.920: 62.6819% ( 113) 00:07:54.897 14417.920 - 14518.745: 64.1481% ( 137) 00:07:54.897 14518.745 - 14619.569: 65.6036% ( 136) 00:07:54.897 14619.569 - 14720.394: 67.3694% ( 165) 00:07:54.897 14720.394 - 14821.218: 69.1032% ( 162) 00:07:54.897 14821.218 - 14922.043: 70.6764% ( 147) 00:07:54.897 14922.043 - 15022.868: 71.9713% ( 121) 00:07:54.897 15022.868 - 15123.692: 73.3519% ( 129) 00:07:54.897 15123.692 - 15224.517: 74.3365% ( 92) 00:07:54.897 15224.517 - 15325.342: 75.2676% ( 87) 00:07:54.897 15325.342 - 15426.166: 76.5304% ( 118) 00:07:54.897 15426.166 - 15526.991: 77.5578% ( 96) 00:07:54.897 15526.991 - 15627.815: 78.5852% ( 96) 00:07:54.897 15627.815 - 15728.640: 79.8801% ( 121) 00:07:54.897 15728.640 - 15829.465: 80.7791% ( 84) 00:07:54.897 15829.465 - 15930.289: 81.4533% ( 63) 00:07:54.897 15930.289 - 16031.114: 82.2774% ( 77) 00:07:54.897 16031.114 - 16131.938: 82.9302% ( 61) 00:07:54.897 16131.938 - 16232.763: 83.7008% ( 72) 00:07:54.897 16232.763 - 16333.588: 84.3857% ( 64) 00:07:54.897 16333.588 - 16434.412: 85.1455% ( 71) 00:07:54.897 16434.412 - 16535.237: 85.8091% ( 62) 00:07:54.897 16535.237 - 16636.062: 86.6545% ( 79) 00:07:54.897 16636.062 - 16736.886: 87.2753% ( 58) 00:07:54.897 16736.886 - 16837.711: 87.6819% ( 38) 00:07:54.897 16837.711 - 16938.535: 88.1100% ( 40) 00:07:54.897 16938.535 - 17039.360: 88.5916% ( 45) 00:07:54.897 17039.360 - 17140.185: 89.0197% ( 40) 00:07:54.897 17140.185 - 17241.009: 89.3836% ( 34) 00:07:54.897 17241.009 - 17341.834: 89.7046% ( 30) 00:07:54.897 17341.834 - 17442.658: 90.0578% ( 33) 00:07:54.897 17442.658 - 17543.483: 90.3682% ( 29) 00:07:54.897 17543.483 - 17644.308: 90.8069% ( 41) 00:07:54.897 17644.308 - 17745.132: 91.2671% ( 43) 00:07:54.897 17745.132 - 17845.957: 91.6417% ( 35) 00:07:54.897 17845.957 - 17946.782: 92.1554% ( 48) 00:07:54.897 17946.782 - 18047.606: 92.4658% ( 29) 00:07:54.897 18047.606 - 18148.431: 92.6905% ( 21) 00:07:54.897 18148.431 - 18249.255: 92.9259% ( 22) 00:07:54.897 18249.255 - 18350.080: 93.1721% ( 23) 00:07:54.897 18350.080 - 18450.905: 93.4182% ( 23) 00:07:54.897 18450.905 - 18551.729: 93.6430% ( 21) 00:07:54.897 18551.729 - 18652.554: 93.8570% ( 20) 00:07:54.897 18652.554 - 18753.378: 94.0497% ( 18) 00:07:54.897 18753.378 - 18854.203: 94.2209% ( 16) 00:07:54.897 18854.203 - 18955.028: 94.3814% ( 15) 00:07:54.897 18955.028 - 19055.852: 94.6383% ( 
24) 00:07:54.897 19055.852 - 19156.677: 94.8737% ( 22) 00:07:54.897 19156.677 - 19257.502: 95.1520% ( 26) 00:07:54.897 19257.502 - 19358.326: 95.4195% ( 25) 00:07:54.897 19358.326 - 19459.151: 95.7299% ( 29) 00:07:54.897 19459.151 - 19559.975: 95.9653% ( 22) 00:07:54.897 19559.975 - 19660.800: 96.2543% ( 27) 00:07:54.897 19660.800 - 19761.625: 96.4362% ( 17) 00:07:54.897 19761.625 - 19862.449: 96.6717% ( 22) 00:07:54.897 19862.449 - 19963.274: 96.8001% ( 12) 00:07:54.897 19963.274 - 20064.098: 96.9178% ( 11) 00:07:54.897 20064.098 - 20164.923: 97.0248% ( 10) 00:07:54.897 20164.923 - 20265.748: 97.1104% ( 8) 00:07:54.897 20265.748 - 20366.572: 97.1961% ( 8) 00:07:54.897 20366.572 - 20467.397: 97.2603% ( 6) 00:07:54.897 20769.871 - 20870.695: 97.3138% ( 5) 00:07:54.897 20870.695 - 20971.520: 97.4101% ( 9) 00:07:54.897 20971.520 - 21072.345: 97.4957% ( 8) 00:07:54.897 21072.345 - 21173.169: 97.5920% ( 9) 00:07:54.897 21173.169 - 21273.994: 97.6670% ( 7) 00:07:54.897 21273.994 - 21374.818: 97.7205% ( 5) 00:07:54.897 21374.818 - 21475.643: 97.7740% ( 5) 00:07:54.897 21475.643 - 21576.468: 97.8275% ( 5) 00:07:54.897 21576.468 - 21677.292: 97.8810% ( 5) 00:07:54.897 21677.292 - 21778.117: 97.9345% ( 5) 00:07:54.897 21778.117 - 21878.942: 97.9452% ( 1) 00:07:54.897 21878.942 - 21979.766: 97.9559% ( 1) 00:07:54.897 21979.766 - 22080.591: 98.0308% ( 7) 00:07:54.897 22080.591 - 22181.415: 98.1057% ( 7) 00:07:54.897 22181.415 - 22282.240: 98.1807% ( 7) 00:07:54.897 22282.240 - 22383.065: 98.2449% ( 6) 00:07:54.897 22383.065 - 22483.889: 98.3198% ( 7) 00:07:54.897 22483.889 - 22584.714: 98.3947% ( 7) 00:07:54.897 22584.714 - 22685.538: 98.4696% ( 7) 00:07:54.897 22685.538 - 22786.363: 98.5445% ( 7) 00:07:54.897 22786.363 - 22887.188: 98.6194% ( 7) 00:07:54.897 22887.188 - 22988.012: 98.6301% ( 1) 00:07:54.897 23290.486 - 23391.311: 98.6408% ( 1) 00:07:54.897 23391.311 - 23492.135: 98.6836% ( 4) 00:07:54.897 23492.135 - 23592.960: 98.7265% ( 4) 00:07:54.897 23592.960 - 23693.785: 98.7800% ( 5) 00:07:54.897 23693.785 - 23794.609: 98.8228% ( 4) 00:07:54.897 23794.609 - 23895.434: 98.8656% ( 4) 00:07:54.897 23895.434 - 23996.258: 98.9084% ( 4) 00:07:54.897 23996.258 - 24097.083: 98.9512% ( 4) 00:07:54.897 24097.083 - 24197.908: 98.9940% ( 4) 00:07:54.897 24197.908 - 24298.732: 99.0368% ( 4) 00:07:54.897 24298.732 - 24399.557: 99.0903% ( 5) 00:07:54.897 24399.557 - 24500.382: 99.1331% ( 4) 00:07:54.897 24500.382 - 24601.206: 99.1759% ( 4) 00:07:54.897 24601.206 - 24702.031: 99.2188% ( 4) 00:07:54.897 24702.031 - 24802.855: 99.2616% ( 4) 00:07:54.897 24802.855 - 24903.680: 99.3151% ( 5) 00:07:54.897 29642.437 - 29844.086: 99.3686% ( 5) 00:07:54.897 29844.086 - 30045.735: 99.4435% ( 7) 00:07:54.897 30045.735 - 30247.385: 99.5291% ( 8) 00:07:54.897 30247.385 - 30449.034: 99.6147% ( 8) 00:07:54.897 30449.034 - 30650.683: 99.7003% ( 8) 00:07:54.897 30650.683 - 30852.332: 99.7967% ( 9) 00:07:54.897 30852.332 - 31053.982: 99.8823% ( 8) 00:07:54.897 31053.982 - 31255.631: 99.9786% ( 9) 00:07:54.897 31255.631 - 31457.280: 100.0000% ( 2) 00:07:54.897 00:07:54.897 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.897 ============================================================================== 00:07:54.897 Range in us Cumulative IO count 00:07:54.897 9175.040 - 9225.452: 0.0531% ( 5) 00:07:54.897 9225.452 - 9275.865: 0.1488% ( 9) 00:07:54.897 9275.865 - 9326.277: 0.2232% ( 7) 00:07:54.897 9326.277 - 9376.689: 0.3827% ( 15) 00:07:54.897 9376.689 - 9427.102: 0.4677% ( 8) 00:07:54.897 9427.102 - 
9477.514: 0.5421% ( 7) 00:07:54.897 9477.514 - 9527.926: 0.6271% ( 8) 00:07:54.897 9527.926 - 9578.338: 0.7122% ( 8) 00:07:54.897 9578.338 - 9628.751: 0.8397% ( 12) 00:07:54.897 9628.751 - 9679.163: 1.1586% ( 30) 00:07:54.897 9679.163 - 9729.575: 1.4668% ( 29) 00:07:54.897 9729.575 - 9779.988: 1.8282% ( 34) 00:07:54.897 9779.988 - 9830.400: 2.2640% ( 41) 00:07:54.897 9830.400 - 9880.812: 2.7955% ( 50) 00:07:54.897 9880.812 - 9931.225: 3.4651% ( 63) 00:07:54.897 9931.225 - 9981.637: 4.0285% ( 53) 00:07:54.897 9981.637 - 10032.049: 4.5812% ( 52) 00:07:54.897 10032.049 - 10082.462: 5.3253% ( 70) 00:07:54.897 10082.462 - 10132.874: 6.0162% ( 65) 00:07:54.897 10132.874 - 10183.286: 6.7071% ( 65) 00:07:54.897 10183.286 - 10233.698: 7.5999% ( 84) 00:07:54.897 10233.698 - 10284.111: 8.4609% ( 81) 00:07:54.897 10284.111 - 10334.523: 9.5132% ( 99) 00:07:54.897 10334.523 - 10384.935: 10.2572% ( 70) 00:07:54.897 10384.935 - 10435.348: 11.0438% ( 74) 00:07:54.897 10435.348 - 10485.760: 11.5859% ( 51) 00:07:54.897 10485.760 - 10536.172: 12.2980% ( 67) 00:07:54.897 10536.172 - 10586.585: 12.9252% ( 59) 00:07:54.897 10586.585 - 10636.997: 13.4885% ( 53) 00:07:54.897 10636.997 - 10687.409: 14.0944% ( 57) 00:07:54.897 10687.409 - 10737.822: 14.8597% ( 72) 00:07:54.897 10737.822 - 10788.234: 15.3912% ( 50) 00:07:54.897 10788.234 - 10838.646: 15.9226% ( 50) 00:07:54.897 10838.646 - 10889.058: 16.5923% ( 63) 00:07:54.897 10889.058 - 10939.471: 17.3363% ( 70) 00:07:54.897 10939.471 - 10989.883: 18.1866% ( 80) 00:07:54.897 10989.883 - 11040.295: 19.0795% ( 84) 00:07:54.897 11040.295 - 11090.708: 20.1105% ( 97) 00:07:54.898 11090.708 - 11141.120: 20.8971% ( 74) 00:07:54.898 11141.120 - 11191.532: 21.6624% ( 72) 00:07:54.898 11191.532 - 11241.945: 22.5128% ( 80) 00:07:54.898 11241.945 - 11292.357: 23.2887% ( 73) 00:07:54.898 11292.357 - 11342.769: 24.0115% ( 68) 00:07:54.898 11342.769 - 11393.182: 24.8087% ( 75) 00:07:54.898 11393.182 - 11443.594: 25.4783% ( 63) 00:07:54.898 11443.594 - 11494.006: 25.9566% ( 45) 00:07:54.898 11494.006 - 11544.418: 26.3924% ( 41) 00:07:54.898 11544.418 - 11594.831: 27.0089% ( 58) 00:07:54.898 11594.831 - 11645.243: 27.6361% ( 59) 00:07:54.898 11645.243 - 11695.655: 28.1781% ( 51) 00:07:54.898 11695.655 - 11746.068: 28.9116% ( 69) 00:07:54.898 11746.068 - 11796.480: 29.4962% ( 55) 00:07:54.898 11796.480 - 11846.892: 30.1233% ( 59) 00:07:54.898 11846.892 - 11897.305: 30.8567% ( 69) 00:07:54.898 11897.305 - 11947.717: 31.4307% ( 54) 00:07:54.898 11947.717 - 11998.129: 32.0259% ( 56) 00:07:54.898 11998.129 - 12048.542: 32.5787% ( 52) 00:07:54.898 12048.542 - 12098.954: 33.1207% ( 51) 00:07:54.898 12098.954 - 12149.366: 33.6203% ( 47) 00:07:54.898 12149.366 - 12199.778: 34.3325% ( 67) 00:07:54.898 12199.778 - 12250.191: 34.8427% ( 48) 00:07:54.898 12250.191 - 12300.603: 35.3104% ( 44) 00:07:54.898 12300.603 - 12351.015: 35.7674% ( 43) 00:07:54.898 12351.015 - 12401.428: 36.2245% ( 43) 00:07:54.898 12401.428 - 12451.840: 36.7772% ( 52) 00:07:54.898 12451.840 - 12502.252: 37.5850% ( 76) 00:07:54.898 12502.252 - 12552.665: 38.2015% ( 58) 00:07:54.898 12552.665 - 12603.077: 38.7330% ( 50) 00:07:54.898 12603.077 - 12653.489: 39.4345% ( 66) 00:07:54.898 12653.489 - 12703.902: 40.0935% ( 62) 00:07:54.898 12703.902 - 12754.314: 40.6569% ( 53) 00:07:54.898 12754.314 - 12804.726: 41.1352% ( 45) 00:07:54.898 12804.726 - 12855.138: 41.7198% ( 55) 00:07:54.898 12855.138 - 12905.551: 42.4001% ( 64) 00:07:54.898 12905.551 - 13006.375: 44.3452% ( 183) 00:07:54.898 13006.375 - 13107.200: 45.4507% ( 104) 
00:07:54.898 13107.200 - 13208.025: 46.5668% ( 105) 00:07:54.898 13208.025 - 13308.849: 48.2355% ( 157) 00:07:54.898 13308.849 - 13409.674: 49.8937% ( 156) 00:07:54.898 13409.674 - 13510.498: 51.2330% ( 126) 00:07:54.898 13510.498 - 13611.323: 52.5723% ( 126) 00:07:54.898 13611.323 - 13712.148: 53.7096% ( 107) 00:07:54.898 13712.148 - 13812.972: 54.9532% ( 117) 00:07:54.898 13812.972 - 13913.797: 56.1331% ( 111) 00:07:54.898 13913.797 - 14014.622: 57.6849% ( 146) 00:07:54.898 14014.622 - 14115.446: 59.1730% ( 140) 00:07:54.898 14115.446 - 14216.271: 60.8099% ( 154) 00:07:54.898 14216.271 - 14317.095: 62.4256% ( 152) 00:07:54.898 14317.095 - 14417.920: 63.7649% ( 126) 00:07:54.898 14417.920 - 14518.745: 65.1998% ( 135) 00:07:54.898 14518.745 - 14619.569: 66.4541% ( 118) 00:07:54.898 14619.569 - 14720.394: 67.6658% ( 114) 00:07:54.898 14720.394 - 14821.218: 69.1008% ( 135) 00:07:54.898 14821.218 - 14922.043: 70.0255% ( 87) 00:07:54.898 14922.043 - 15022.868: 71.0778% ( 99) 00:07:54.898 15022.868 - 15123.692: 72.0770% ( 94) 00:07:54.898 15123.692 - 15224.517: 73.1080% ( 97) 00:07:54.898 15224.517 - 15325.342: 74.2985% ( 112) 00:07:54.898 15325.342 - 15426.166: 75.3720% ( 101) 00:07:54.898 15426.166 - 15526.991: 76.4881% ( 105) 00:07:54.898 15526.991 - 15627.815: 77.5510% ( 100) 00:07:54.898 15627.815 - 15728.640: 78.5927% ( 98) 00:07:54.898 15728.640 - 15829.465: 79.9851% ( 131) 00:07:54.898 15829.465 - 15930.289: 81.0162% ( 97) 00:07:54.898 15930.289 - 16031.114: 81.8984% ( 83) 00:07:54.898 16031.114 - 16131.938: 82.7381% ( 79) 00:07:54.898 16131.938 - 16232.763: 83.5884% ( 80) 00:07:54.898 16232.763 - 16333.588: 84.2900% ( 66) 00:07:54.898 16333.588 - 16434.412: 85.0765% ( 74) 00:07:54.898 16434.412 - 16535.237: 85.8206% ( 70) 00:07:54.898 16535.237 - 16636.062: 86.4371% ( 58) 00:07:54.898 16636.062 - 16736.886: 87.2130% ( 73) 00:07:54.898 16736.886 - 16837.711: 87.9571% ( 70) 00:07:54.898 16837.711 - 16938.535: 88.7968% ( 79) 00:07:54.898 16938.535 - 17039.360: 89.6259% ( 78) 00:07:54.898 17039.360 - 17140.185: 90.7419% ( 105) 00:07:54.898 17140.185 - 17241.009: 91.2628% ( 49) 00:07:54.898 17241.009 - 17341.834: 91.7304% ( 44) 00:07:54.898 17341.834 - 17442.658: 92.1875% ( 43) 00:07:54.898 17442.658 - 17543.483: 92.5489% ( 34) 00:07:54.898 17543.483 - 17644.308: 92.8040% ( 24) 00:07:54.898 17644.308 - 17745.132: 93.0166% ( 20) 00:07:54.898 17745.132 - 17845.957: 93.1441% ( 12) 00:07:54.898 17845.957 - 17946.782: 93.2292% ( 8) 00:07:54.898 17946.782 - 18047.606: 93.3355% ( 10) 00:07:54.898 18047.606 - 18148.431: 93.5055% ( 16) 00:07:54.898 18148.431 - 18249.255: 93.6756% ( 16) 00:07:54.898 18249.255 - 18350.080: 93.8669% ( 18) 00:07:54.898 18350.080 - 18450.905: 94.2071% ( 32) 00:07:54.898 18450.905 - 18551.729: 94.4090% ( 19) 00:07:54.898 18551.729 - 18652.554: 94.6216% ( 20) 00:07:54.898 18652.554 - 18753.378: 94.8129% ( 18) 00:07:54.898 18753.378 - 18854.203: 94.9511% ( 13) 00:07:54.898 18854.203 - 18955.028: 95.0361% ( 8) 00:07:54.898 18955.028 - 19055.852: 95.1212% ( 8) 00:07:54.898 19055.852 - 19156.677: 95.1956% ( 7) 00:07:54.898 19156.677 - 19257.502: 95.2594% ( 6) 00:07:54.898 19257.502 - 19358.326: 95.3550% ( 9) 00:07:54.898 19358.326 - 19459.151: 95.4719% ( 11) 00:07:54.898 19459.151 - 19559.975: 95.7058% ( 22) 00:07:54.898 19559.975 - 19660.800: 95.9290% ( 21) 00:07:54.898 19660.800 - 19761.625: 96.2372% ( 29) 00:07:54.898 19761.625 - 19862.449: 96.4923% ( 24) 00:07:54.898 19862.449 - 19963.274: 96.7368% ( 23) 00:07:54.898 19963.274 - 20064.098: 97.0238% ( 27) 00:07:54.898 
20064.098 - 20164.923: 97.2895% ( 25) 00:07:54.898 20164.923 - 20265.748: 97.5765% ( 27) 00:07:54.898 20265.748 - 20366.572: 97.8529% ( 26) 00:07:54.898 20366.572 - 20467.397: 98.0230% ( 16) 00:07:54.898 20467.397 - 20568.222: 98.1399% ( 11) 00:07:54.898 20568.222 - 20669.046: 98.2462% ( 10) 00:07:54.898 20669.046 - 20769.871: 98.3312% ( 8) 00:07:54.898 20769.871 - 20870.695: 98.3737% ( 4) 00:07:54.898 20870.695 - 20971.520: 98.4269% ( 5) 00:07:54.898 20971.520 - 21072.345: 98.4588% ( 3) 00:07:54.898 21072.345 - 21173.169: 98.5332% ( 7) 00:07:54.898 21173.169 - 21273.994: 98.5651% ( 3) 00:07:54.898 21273.994 - 21374.818: 98.5863% ( 2) 00:07:54.898 21374.818 - 21475.643: 98.6288% ( 4) 00:07:54.898 21475.643 - 21576.468: 98.7032% ( 7) 00:07:54.898 21576.468 - 21677.292: 98.7989% ( 9) 00:07:54.898 21677.292 - 21778.117: 98.8946% ( 9) 00:07:54.898 21778.117 - 21878.942: 98.9583% ( 6) 00:07:54.898 21878.942 - 21979.766: 99.0115% ( 5) 00:07:54.898 21979.766 - 22080.591: 99.0753% ( 6) 00:07:54.898 22080.591 - 22181.415: 99.1390% ( 6) 00:07:54.898 22181.415 - 22282.240: 99.2134% ( 7) 00:07:54.898 22282.240 - 22383.065: 99.2772% ( 6) 00:07:54.898 22383.065 - 22483.889: 99.3197% ( 4) 00:07:54.898 22786.363 - 22887.188: 99.3410% ( 2) 00:07:54.898 22887.188 - 22988.012: 99.3835% ( 4) 00:07:54.898 22988.012 - 23088.837: 99.4260% ( 4) 00:07:54.898 23088.837 - 23189.662: 99.4685% ( 4) 00:07:54.898 23189.662 - 23290.486: 99.5111% ( 4) 00:07:54.898 23290.486 - 23391.311: 99.5536% ( 4) 00:07:54.898 23391.311 - 23492.135: 99.5961% ( 4) 00:07:54.898 23492.135 - 23592.960: 99.6386% ( 4) 00:07:54.898 23592.960 - 23693.785: 99.6918% ( 5) 00:07:54.898 23693.785 - 23794.609: 99.7343% ( 4) 00:07:54.898 23794.609 - 23895.434: 99.7768% ( 4) 00:07:54.898 23895.434 - 23996.258: 99.8193% ( 4) 00:07:54.898 23996.258 - 24097.083: 99.8724% ( 5) 00:07:54.898 24097.083 - 24197.908: 99.9150% ( 4) 00:07:54.898 24197.908 - 24298.732: 99.9575% ( 4) 00:07:54.898 24298.732 - 24399.557: 100.0000% ( 4) 00:07:54.898 00:07:54.898 17:38:18 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:54.898 00:07:54.898 real 0m2.575s 00:07:54.898 user 0m2.239s 00:07:54.898 sys 0m0.217s 00:07:54.898 17:38:18 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.898 17:38:18 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.898 ************************************ 00:07:54.898 END TEST nvme_perf 00:07:54.898 ************************************ 00:07:54.898 17:38:18 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:54.898 17:38:18 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.898 17:38:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.899 17:38:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.899 ************************************ 00:07:54.899 START TEST nvme_hello_world 00:07:54.899 ************************************ 00:07:54.899 17:38:18 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:55.158 Initializing NVMe Controllers 00:07:55.158 Attached to 0000:00:10.0 00:07:55.158 Namespace ID: 1 size: 6GB 00:07:55.158 Attached to 0000:00:11.0 00:07:55.158 Namespace ID: 1 size: 5GB 00:07:55.158 Attached to 0000:00:13.0 00:07:55.158 Namespace ID: 1 size: 1GB 00:07:55.158 Attached to 0000:00:12.0 00:07:55.158 Namespace ID: 1 size: 4GB 00:07:55.158 Namespace ID: 2 size: 4GB 00:07:55.158 
Namespace ID: 3 size: 4GB 00:07:55.158 Initialization complete. 00:07:55.158 INFO: using host memory buffer for IO 00:07:55.158 Hello world! 00:07:55.158 INFO: using host memory buffer for IO 00:07:55.158 Hello world! 00:07:55.158 INFO: using host memory buffer for IO 00:07:55.158 Hello world! 00:07:55.158 INFO: using host memory buffer for IO 00:07:55.158 Hello world! 00:07:55.158 INFO: using host memory buffer for IO 00:07:55.158 Hello world! 00:07:55.158 INFO: using host memory buffer for IO 00:07:55.158 Hello world! 00:07:55.158 00:07:55.158 real 0m0.258s 00:07:55.158 user 0m0.112s 00:07:55.158 sys 0m0.105s 00:07:55.158 17:38:18 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.158 ************************************ 00:07:55.158 END TEST nvme_hello_world 00:07:55.158 17:38:18 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:55.158 ************************************ 00:07:55.158 17:38:18 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:55.159 17:38:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.159 17:38:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.159 17:38:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.159 ************************************ 00:07:55.159 START TEST nvme_sgl 00:07:55.159 ************************************ 00:07:55.159 17:38:18 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:55.417 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:55.417 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:55.417 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:55.417 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:55.417 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:55.417 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:55.417 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:55.417 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:55.417 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:55.417 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:55.417 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:55.417 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:55.417 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_2 Invalid IO length parameter 
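
The hello_world pass above is the SPDK NVMe driver's canonical round trip: probe the PCIe bus, claim each controller, write a buffer holding "Hello world!" to LBA 0 of namespace 1, and poll the queue pair until the completion callback fires. A minimal sketch of that flow, assuming the public spdk_nvme_* API and trimming the error handling and the read-back step that the real example in build/examples also performs:

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    static bool g_done;

    static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl) {
        g_done = true;                 /* real code also checks spdk_nvme_cpl_is_error(cpl) */
    }

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts) {
        return true;                   /* attach to every controller found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
        struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
        char *buf = spdk_zmalloc(sz, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        snprintf(buf, sz, "Hello world!");
        g_done = false;
        spdk_nvme_ns_cmd_write(ns, qp, buf, 0, 1, io_complete, NULL, 0);
        while (!g_done) {
            spdk_nvme_qpair_process_completions(qp, 0);   /* polled mode, no interrupts */
        }
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qp);
    }

    int main(void) {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0) {
            return 1;
        }
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }
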
00:07:55.417 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:55.417 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:55.417 NVMe Readv/Writev Request test 00:07:55.417 Attached to 0000:00:10.0 00:07:55.417 Attached to 0000:00:11.0 00:07:55.417 Attached to 0000:00:13.0 00:07:55.417 Attached to 0000:00:12.0 00:07:55.417 0000:00:10.0: build_io_request_2 test passed 00:07:55.417 0000:00:10.0: build_io_request_4 test passed 00:07:55.417 0000:00:10.0: build_io_request_5 test passed 00:07:55.417 0000:00:10.0: build_io_request_6 test passed 00:07:55.417 0000:00:10.0: build_io_request_7 test passed 00:07:55.417 0000:00:10.0: build_io_request_10 test passed 00:07:55.417 0000:00:11.0: build_io_request_2 test passed 00:07:55.417 0000:00:11.0: build_io_request_4 test passed 00:07:55.417 0000:00:11.0: build_io_request_5 test passed 00:07:55.417 0000:00:11.0: build_io_request_6 test passed 00:07:55.417 0000:00:11.0: build_io_request_7 test passed 00:07:55.417 0000:00:11.0: build_io_request_10 test passed 00:07:55.417 Cleaning up... 00:07:55.678 00:07:55.678 real 0m0.378s 00:07:55.678 user 0m0.219s 00:07:55.678 sys 0m0.113s 00:07:55.678 17:38:18 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.678 ************************************ 00:07:55.678 END TEST nvme_sgl 00:07:55.678 ************************************ 00:07:55.678 17:38:18 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:55.678 17:38:19 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:55.678 17:38:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.678 17:38:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.678 17:38:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.678 ************************************ 00:07:55.678 START TEST nvme_e2edp 00:07:55.678 ************************************ 00:07:55.678 17:38:19 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:55.951 NVMe Write/Read with End-to-End data protection test 00:07:55.951 Attached to 0000:00:10.0 00:07:55.951 Attached to 0000:00:11.0 00:07:55.951 Attached to 0000:00:13.0 00:07:55.951 Attached to 0000:00:12.0 00:07:55.951 Cleaning up... 
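
The nvme_sgl pass that finishes above drives vectored IO: each build_io_request_N case assembles a scatter-gather payload, and the cases whose total length is deliberately invalid are rejected with "Invalid IO length parameter", which is the expected outcome for them, while the remainder pass. In the SPDK NVMe driver a scatter-gather request supplies its payload through two callbacks instead of one flat buffer; a sketch with the buffer setup simplified (sgl_ctx and sgl_write are illustrative names):

    /* Two-segment scatter-gather write: the driver calls reset_sgl() once per
     * request, then next_sge() repeatedly until the payload is fully mapped. */
    struct sgl_ctx {
        void     *seg[2];
        uint32_t  len[2];
        int       idx;
    };

    static void reset_sgl(void *ref, uint32_t sgl_offset) {
        ((struct sgl_ctx *)ref)->idx = 0;   /* offset handling elided */
    }

    static int next_sge(void *ref, void **address, uint32_t *length) {
        struct sgl_ctx *c = ref;

        *address = c->seg[c->idx];
        *length  = c->len[c->idx];
        c->idx++;
        return 0;
    }

    static int sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                         struct sgl_ctx *c, uint32_t nblocks, spdk_nvme_cmd_cb cb_fn) {
        /* The test's invalid-length cases are expected to be rejected rather
         * than reach the device; valid ones complete through cb_fn. */
        return spdk_nvme_ns_cmd_writev(ns, qp, 0, nblocks, cb_fn, c, 0,
                                       reset_sgl, next_sge);
    }
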
00:07:55.951 00:07:55.951 real 0m0.231s 00:07:55.951 user 0m0.082s 00:07:55.951 sys 0m0.093s 00:07:55.951 17:38:19 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.951 ************************************ 00:07:55.951 END TEST nvme_e2edp 00:07:55.951 ************************************ 00:07:55.951 17:38:19 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:55.951 17:38:19 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:55.951 17:38:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.951 17:38:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.951 17:38:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.951 ************************************ 00:07:55.951 START TEST nvme_reserve 00:07:55.951 ************************************ 00:07:55.951 17:38:19 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:56.218 ===================================================== 00:07:56.218 NVMe Controller at PCI bus 0, device 16, function 0 00:07:56.218 ===================================================== 00:07:56.218 Reservations: Not Supported 00:07:56.218 ===================================================== 00:07:56.218 NVMe Controller at PCI bus 0, device 17, function 0 00:07:56.218 ===================================================== 00:07:56.218 Reservations: Not Supported 00:07:56.218 ===================================================== 00:07:56.218 NVMe Controller at PCI bus 0, device 19, function 0 00:07:56.218 ===================================================== 00:07:56.218 Reservations: Not Supported 00:07:56.218 ===================================================== 00:07:56.218 NVMe Controller at PCI bus 0, device 18, function 0 00:07:56.218 ===================================================== 00:07:56.218 Reservations: Not Supported 00:07:56.218 Reservation test passed 00:07:56.218 00:07:56.218 real 0m0.229s 00:07:56.218 user 0m0.077s 00:07:56.218 sys 0m0.103s 00:07:56.218 17:38:19 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.218 ************************************ 00:07:56.218 END TEST nvme_reserve 00:07:56.218 ************************************ 00:07:56.218 17:38:19 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:56.218 17:38:19 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:56.218 17:38:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.218 17:38:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.218 17:38:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.218 ************************************ 00:07:56.218 START TEST nvme_err_injection 00:07:56.218 ************************************ 00:07:56.218 17:38:19 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:56.477 NVMe Error Injection test 00:07:56.477 Attached to 0000:00:10.0 00:07:56.477 Attached to 0000:00:11.0 00:07:56.477 Attached to 0000:00:13.0 00:07:56.477 Attached to 0000:00:12.0 00:07:56.477 0000:00:10.0: get features failed as expected 00:07:56.477 0000:00:11.0: get features failed as expected 00:07:56.477 0000:00:13.0: get features failed as expected 00:07:56.477 0000:00:12.0: get features failed as expected 00:07:56.477 
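
The "get features failed as expected" lines above, and the "successfully as expected" lines that follow, bracket an injected fault: the test arms an error on each controller, expects a Get Features admin command to fail, then disarms the injection and expects the same command to succeed. The probe itself is an ordinary polled admin command; a hedged sketch of that round trip (the arming step is internal to the test tool and omitted here):

    static bool g_admin_done;
    static bool g_admin_failed;

    static void feat_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
        g_admin_failed = spdk_nvme_cpl_is_error(cpl);
        g_admin_done = true;
    }

    /* Issue Get Features (temperature threshold) and report whether it failed. */
    static bool get_features_failed(struct spdk_nvme_ctrlr *ctrlr) {
        g_admin_done = false;
        if (spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                            0, NULL, 0, feat_cb, NULL) != 0) {
            return true;                    /* submission itself failed */
        }
        while (!g_admin_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);  /* admin queue is polled too */
        }
        return g_admin_failed;
    }
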
0000:00:10.0: get features successfully as expected 00:07:56.477 0000:00:11.0: get features successfully as expected 00:07:56.477 0000:00:13.0: get features successfully as expected 00:07:56.477 0000:00:12.0: get features successfully as expected 00:07:56.477 0000:00:10.0: read failed as expected 00:07:56.477 0000:00:11.0: read failed as expected 00:07:56.477 0000:00:13.0: read failed as expected 00:07:56.477 0000:00:12.0: read failed as expected 00:07:56.477 0000:00:10.0: read successfully as expected 00:07:56.477 0000:00:11.0: read successfully as expected 00:07:56.477 0000:00:13.0: read successfully as expected 00:07:56.477 0000:00:12.0: read successfully as expected 00:07:56.477 Cleaning up... 00:07:56.477 00:07:56.477 real 0m0.243s 00:07:56.477 user 0m0.095s 00:07:56.477 sys 0m0.106s 00:07:56.477 17:38:19 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.477 17:38:19 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:56.477 ************************************ 00:07:56.477 END TEST nvme_err_injection 00:07:56.477 ************************************ 00:07:56.477 17:38:19 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:56.477 17:38:19 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:56.477 17:38:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.477 17:38:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.477 ************************************ 00:07:56.477 START TEST nvme_overhead 00:07:56.477 ************************************ 00:07:56.477 17:38:19 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:57.861 Initializing NVMe Controllers 00:07:57.861 Attached to 0000:00:10.0 00:07:57.861 Attached to 0000:00:11.0 00:07:57.861 Attached to 0000:00:13.0 00:07:57.861 Attached to 0000:00:12.0 00:07:57.861 Initialization complete. Launching workers. 
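
The submit/complete averages that follow split the host-side driver cost in two: "submit" is the time spent queueing a command and "complete" the time spent reaping it; neither includes time on the device. Measuring the submit side amounts to bracketing the submission call with the environment layer's TSC helpers; a sketch reusing io_complete from the earlier hello_world sketch:

    /* Per-IO submit overhead in nanoseconds: time only the call that places
     * the command in the submission queue. The completion side is measured
     * the same way around spdk_nvme_qpair_process_completions(). */
    static uint64_t timed_submit_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                                    void *buf, uint64_t lba) {
        uint64_t hz = spdk_get_ticks_hz();
        uint64_t t0 = spdk_get_ticks();

        spdk_nvme_ns_cmd_read(ns, qp, buf, lba, 1, io_complete, NULL, 0);
        return (spdk_get_ticks() - t0) * 1000000000ULL / hz;
    }
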
00:07:57.861 submit (in ns) avg, min, max = 12458.1, 11344.6, 522132.3 00:07:57.861 complete (in ns) avg, min, max = 8375.6, 7524.6, 410386.2 00:07:57.861 00:07:57.861 Submit histogram 00:07:57.861 ================ 00:07:57.861 Range in us Cumulative Count 00:07:57.861 11.323 - 11.372: 0.0081% ( 1) 00:07:57.861 11.520 - 11.569: 0.0162% ( 1) 00:07:57.861 11.569 - 11.618: 0.0243% ( 1) 00:07:57.861 11.618 - 11.668: 0.0323% ( 1) 00:07:57.861 11.668 - 11.717: 0.0808% ( 6) 00:07:57.861 11.717 - 11.766: 0.1859% ( 13) 00:07:57.861 11.766 - 11.815: 0.5093% ( 40) 00:07:57.861 11.815 - 11.865: 1.9885% ( 183) 00:07:57.861 11.865 - 11.914: 5.5937% ( 446) 00:07:57.861 11.914 - 11.963: 11.9150% ( 782) 00:07:57.861 11.963 - 12.012: 19.9418% ( 993) 00:07:57.861 12.012 - 12.062: 29.2862% ( 1156) 00:07:57.861 12.062 - 12.111: 39.3824% ( 1249) 00:07:57.861 12.111 - 12.160: 49.5595% ( 1259) 00:07:57.861 12.160 - 12.209: 58.6533% ( 1125) 00:07:57.861 12.209 - 12.258: 66.0092% ( 910) 00:07:57.861 12.258 - 12.308: 72.2981% ( 778) 00:07:57.861 12.308 - 12.357: 77.0188% ( 584) 00:07:57.861 12.357 - 12.406: 80.8342% ( 472) 00:07:57.861 12.406 - 12.455: 83.9221% ( 382) 00:07:57.861 12.455 - 12.505: 87.0746% ( 390) 00:07:57.861 12.505 - 12.554: 89.1763% ( 260) 00:07:57.861 12.554 - 12.603: 91.0759% ( 235) 00:07:57.861 12.603 - 12.702: 93.6626% ( 320) 00:07:57.861 12.702 - 12.800: 95.0044% ( 166) 00:07:57.861 12.800 - 12.898: 95.7320% ( 90) 00:07:57.861 12.898 - 12.997: 96.0715% ( 42) 00:07:57.861 12.997 - 13.095: 96.2493% ( 22) 00:07:57.861 13.095 - 13.194: 96.4514% ( 25) 00:07:57.861 13.194 - 13.292: 96.5241% ( 9) 00:07:57.861 13.292 - 13.391: 96.5888% ( 8) 00:07:57.861 13.391 - 13.489: 96.6535% ( 8) 00:07:57.861 13.489 - 13.588: 96.6615% ( 1) 00:07:57.861 13.588 - 13.686: 96.6858% ( 3) 00:07:57.861 13.686 - 13.785: 96.7020% ( 2) 00:07:57.861 13.785 - 13.883: 96.7100% ( 1) 00:07:57.861 13.883 - 13.982: 96.7343% ( 3) 00:07:57.861 13.982 - 14.080: 96.7909% ( 7) 00:07:57.861 14.080 - 14.178: 96.8798% ( 11) 00:07:57.861 14.178 - 14.277: 97.0011% ( 15) 00:07:57.861 14.277 - 14.375: 97.1142% ( 14) 00:07:57.861 14.375 - 14.474: 97.2678% ( 19) 00:07:57.861 14.474 - 14.572: 97.4699% ( 25) 00:07:57.861 14.572 - 14.671: 97.5669% ( 12) 00:07:57.861 14.671 - 14.769: 97.6477% ( 10) 00:07:57.861 14.769 - 14.868: 97.7043% ( 7) 00:07:57.861 14.868 - 14.966: 97.7932% ( 11) 00:07:57.861 14.966 - 15.065: 97.8336% ( 5) 00:07:57.861 15.065 - 15.163: 97.8579% ( 3) 00:07:57.861 15.163 - 15.262: 97.9145% ( 7) 00:07:57.861 15.262 - 15.360: 97.9226% ( 1) 00:07:57.861 15.360 - 15.458: 97.9306% ( 1) 00:07:57.861 15.458 - 15.557: 97.9387% ( 1) 00:07:57.861 15.557 - 15.655: 97.9711% ( 4) 00:07:57.861 15.655 - 15.754: 97.9872% ( 2) 00:07:57.861 15.754 - 15.852: 98.0034% ( 2) 00:07:57.861 15.852 - 15.951: 98.0276% ( 3) 00:07:57.861 16.049 - 16.148: 98.0438% ( 2) 00:07:57.861 16.148 - 16.246: 98.0519% ( 1) 00:07:57.861 16.345 - 16.443: 98.0681% ( 2) 00:07:57.861 16.443 - 16.542: 98.0842% ( 2) 00:07:57.862 16.542 - 16.640: 98.1085% ( 3) 00:07:57.862 16.935 - 17.034: 98.1246% ( 2) 00:07:57.862 17.034 - 17.132: 98.1327% ( 1) 00:07:57.862 17.132 - 17.231: 98.1408% ( 1) 00:07:57.862 17.231 - 17.329: 98.1731% ( 4) 00:07:57.862 17.329 - 17.428: 98.1893% ( 2) 00:07:57.862 17.428 - 17.526: 98.2055% ( 2) 00:07:57.862 17.526 - 17.625: 98.2378% ( 4) 00:07:57.862 17.625 - 17.723: 98.2459% ( 1) 00:07:57.862 17.723 - 17.822: 98.2621% ( 2) 00:07:57.862 17.822 - 17.920: 98.3106% ( 6) 00:07:57.862 17.920 - 18.018: 98.3429% ( 4) 00:07:57.862 18.018 - 18.117: 98.3833% ( 
5) 00:07:57.862 18.117 - 18.215: 98.4480% ( 8) 00:07:57.862 18.215 - 18.314: 98.4965% ( 6) 00:07:57.862 18.314 - 18.412: 98.5612% ( 8) 00:07:57.862 18.412 - 18.511: 98.6905% ( 16) 00:07:57.862 18.511 - 18.609: 98.7956% ( 13) 00:07:57.862 18.609 - 18.708: 98.8926% ( 12) 00:07:57.862 18.708 - 18.806: 98.9653% ( 9) 00:07:57.862 18.806 - 18.905: 99.0138% ( 6) 00:07:57.862 18.905 - 19.003: 99.0866% ( 9) 00:07:57.862 19.003 - 19.102: 99.1512% ( 8) 00:07:57.862 19.102 - 19.200: 99.1917% ( 5) 00:07:57.862 19.200 - 19.298: 99.2482% ( 7) 00:07:57.862 19.298 - 19.397: 99.2806% ( 4) 00:07:57.862 19.397 - 19.495: 99.3533% ( 9) 00:07:57.862 19.495 - 19.594: 99.4018% ( 6) 00:07:57.862 19.594 - 19.692: 99.4180% ( 2) 00:07:57.862 19.791 - 19.889: 99.4342% ( 2) 00:07:57.862 19.889 - 19.988: 99.4665% ( 4) 00:07:57.862 19.988 - 20.086: 99.5150% ( 6) 00:07:57.862 20.086 - 20.185: 99.5554% ( 5) 00:07:57.862 20.185 - 20.283: 99.5797% ( 3) 00:07:57.862 20.283 - 20.382: 99.6039% ( 3) 00:07:57.862 20.382 - 20.480: 99.6443% ( 5) 00:07:57.862 20.480 - 20.578: 99.6605% ( 2) 00:07:57.862 20.677 - 20.775: 99.6686% ( 1) 00:07:57.862 20.775 - 20.874: 99.6767% ( 1) 00:07:57.862 21.071 - 21.169: 99.6847% ( 1) 00:07:57.862 21.563 - 21.662: 99.6928% ( 1) 00:07:57.862 21.662 - 21.760: 99.7009% ( 1) 00:07:57.862 21.858 - 21.957: 99.7090% ( 1) 00:07:57.862 22.055 - 22.154: 99.7171% ( 1) 00:07:57.862 22.449 - 22.548: 99.7252% ( 1) 00:07:57.862 22.646 - 22.745: 99.7332% ( 1) 00:07:57.862 22.745 - 22.843: 99.7413% ( 1) 00:07:57.862 22.942 - 23.040: 99.7494% ( 1) 00:07:57.862 23.040 - 23.138: 99.7656% ( 2) 00:07:57.862 23.434 - 23.532: 99.7817% ( 2) 00:07:57.862 23.532 - 23.631: 99.7979% ( 2) 00:07:57.862 24.123 - 24.222: 99.8060% ( 1) 00:07:57.862 24.615 - 24.714: 99.8141% ( 1) 00:07:57.862 24.812 - 24.911: 99.8222% ( 1) 00:07:57.862 25.009 - 25.108: 99.8302% ( 1) 00:07:57.862 25.206 - 25.403: 99.8383% ( 1) 00:07:57.862 25.403 - 25.600: 99.8545% ( 2) 00:07:57.862 25.600 - 25.797: 99.8626% ( 1) 00:07:57.862 26.782 - 26.978: 99.8707% ( 1) 00:07:57.862 27.175 - 27.372: 99.8787% ( 1) 00:07:57.862 27.963 - 28.160: 99.8868% ( 1) 00:07:57.862 29.735 - 29.932: 99.8949% ( 1) 00:07:57.862 30.326 - 30.523: 99.9030% ( 1) 00:07:57.862 31.902 - 32.098: 99.9111% ( 1) 00:07:57.862 37.218 - 37.415: 99.9192% ( 1) 00:07:57.862 38.006 - 38.203: 99.9353% ( 2) 00:07:57.862 51.988 - 52.382: 99.9434% ( 1) 00:07:57.862 52.382 - 52.775: 99.9515% ( 1) 00:07:57.862 70.892 - 71.286: 99.9596% ( 1) 00:07:57.862 83.102 - 83.495: 99.9757% ( 2) 00:07:57.862 92.160 - 92.554: 99.9838% ( 1) 00:07:57.862 215.828 - 217.403: 99.9919% ( 1) 00:07:57.862 519.877 - 523.028: 100.0000% ( 1) 00:07:57.862 00:07:57.862 Complete histogram 00:07:57.862 ================== 00:07:57.862 Range in us Cumulative Count 00:07:57.862 7.483 - 7.532: 0.0081% ( 1) 00:07:57.862 7.680 - 7.729: 0.0162% ( 1) 00:07:57.862 7.778 - 7.828: 0.0566% ( 5) 00:07:57.862 7.828 - 7.877: 0.3718% ( 39) 00:07:57.862 7.877 - 7.926: 2.1502% ( 220) 00:07:57.862 7.926 - 7.975: 8.0430% ( 729) 00:07:57.862 7.975 - 8.025: 20.0469% ( 1485) 00:07:57.862 8.025 - 8.074: 35.3488% ( 1893) 00:07:57.862 8.074 - 8.123: 49.6484% ( 1769) 00:07:57.862 8.123 - 8.172: 61.4987% ( 1466) 00:07:57.862 8.172 - 8.222: 70.5844% ( 1124) 00:07:57.862 8.222 - 8.271: 77.2533% ( 825) 00:07:57.862 8.271 - 8.320: 82.5156% ( 651) 00:07:57.862 8.320 - 8.369: 86.5088% ( 494) 00:07:57.862 8.369 - 8.418: 89.3865% ( 356) 00:07:57.862 8.418 - 8.468: 91.4882% ( 260) 00:07:57.862 8.468 - 8.517: 92.9513% ( 181) 00:07:57.862 8.517 - 8.566: 94.1314% ( 146) 
00:07:57.862 8.566 - 8.615: 95.0691% ( 116) 00:07:57.862 8.615 - 8.665: 95.8290% ( 94) 00:07:57.862 8.665 - 8.714: 96.3544% ( 65) 00:07:57.862 8.714 - 8.763: 96.7828% ( 53) 00:07:57.862 8.763 - 8.812: 97.1546% ( 46) 00:07:57.862 8.812 - 8.862: 97.3082% ( 19) 00:07:57.862 8.862 - 8.911: 97.5103% ( 25) 00:07:57.862 8.911 - 8.960: 97.5992% ( 11) 00:07:57.862 8.960 - 9.009: 97.7447% ( 18) 00:07:57.862 9.009 - 9.058: 97.8013% ( 7) 00:07:57.862 9.058 - 9.108: 97.8821% ( 10) 00:07:57.862 9.108 - 9.157: 97.9468% ( 8) 00:07:57.862 9.157 - 9.206: 98.0115% ( 8) 00:07:57.862 9.206 - 9.255: 98.0276% ( 2) 00:07:57.862 9.255 - 9.305: 98.0681% ( 5) 00:07:57.862 9.305 - 9.354: 98.0842% ( 2) 00:07:57.862 9.354 - 9.403: 98.1246% ( 5) 00:07:57.862 9.403 - 9.452: 98.1570% ( 4) 00:07:57.862 9.452 - 9.502: 98.1651% ( 1) 00:07:57.862 9.551 - 9.600: 98.1812% ( 2) 00:07:57.862 9.600 - 9.649: 98.1893% ( 1) 00:07:57.862 9.649 - 9.698: 98.2055% ( 2) 00:07:57.862 9.698 - 9.748: 98.2297% ( 3) 00:07:57.862 9.748 - 9.797: 98.2459% ( 2) 00:07:57.862 9.846 - 9.895: 98.2621% ( 2) 00:07:57.862 9.945 - 9.994: 98.2701% ( 1) 00:07:57.862 9.994 - 10.043: 98.2863% ( 2) 00:07:57.862 10.092 - 10.142: 98.3025% ( 2) 00:07:57.862 10.142 - 10.191: 98.3186% ( 2) 00:07:57.862 10.240 - 10.289: 98.3267% ( 1) 00:07:57.862 10.289 - 10.338: 98.3348% ( 1) 00:07:57.862 10.388 - 10.437: 98.3429% ( 1) 00:07:57.862 10.437 - 10.486: 98.3510% ( 1) 00:07:57.862 10.585 - 10.634: 98.3591% ( 1) 00:07:57.862 10.634 - 10.683: 98.3671% ( 1) 00:07:57.862 10.782 - 10.831: 98.3752% ( 1) 00:07:57.862 11.274 - 11.323: 98.3833% ( 1) 00:07:57.862 11.372 - 11.422: 98.3914% ( 1) 00:07:57.862 11.914 - 11.963: 98.3995% ( 1) 00:07:57.862 11.963 - 12.012: 98.4076% ( 1) 00:07:57.862 12.111 - 12.160: 98.4156% ( 1) 00:07:57.862 12.160 - 12.209: 98.4237% ( 1) 00:07:57.862 12.258 - 12.308: 98.4318% ( 1) 00:07:57.862 12.308 - 12.357: 98.4399% ( 1) 00:07:57.862 12.702 - 12.800: 98.4480% ( 1) 00:07:57.863 12.800 - 12.898: 98.4642% ( 2) 00:07:57.863 12.898 - 12.997: 98.4803% ( 2) 00:07:57.863 12.997 - 13.095: 98.4884% ( 1) 00:07:57.863 13.194 - 13.292: 98.4965% ( 1) 00:07:57.863 13.292 - 13.391: 98.5127% ( 2) 00:07:57.863 13.686 - 13.785: 98.5207% ( 1) 00:07:57.863 13.785 - 13.883: 98.5450% ( 3) 00:07:57.863 13.883 - 13.982: 98.5692% ( 3) 00:07:57.863 13.982 - 14.080: 98.6016% ( 4) 00:07:57.863 14.080 - 14.178: 98.6177% ( 2) 00:07:57.863 14.178 - 14.277: 98.6905% ( 9) 00:07:57.863 14.277 - 14.375: 98.7956% ( 13) 00:07:57.863 14.375 - 14.474: 98.8117% ( 2) 00:07:57.863 14.474 - 14.572: 98.8764% ( 8) 00:07:57.863 14.572 - 14.671: 98.9087% ( 4) 00:07:57.863 14.671 - 14.769: 98.9734% ( 8) 00:07:57.863 14.769 - 14.868: 99.0138% ( 5) 00:07:57.863 14.868 - 14.966: 99.1027% ( 11) 00:07:57.863 14.966 - 15.065: 99.1674% ( 8) 00:07:57.863 15.065 - 15.163: 99.2159% ( 6) 00:07:57.863 15.163 - 15.262: 99.2806% ( 8) 00:07:57.863 15.262 - 15.360: 99.3614% ( 10) 00:07:57.863 15.360 - 15.458: 99.3857% ( 3) 00:07:57.863 15.458 - 15.557: 99.4018% ( 2) 00:07:57.863 15.557 - 15.655: 99.4503% ( 6) 00:07:57.863 15.655 - 15.754: 99.5069% ( 7) 00:07:57.863 15.754 - 15.852: 99.5473% ( 5) 00:07:57.863 15.852 - 15.951: 99.5716% ( 3) 00:07:57.863 15.951 - 16.049: 99.5958% ( 3) 00:07:57.863 16.049 - 16.148: 99.6039% ( 1) 00:07:57.863 16.148 - 16.246: 99.6120% ( 1) 00:07:57.863 16.246 - 16.345: 99.6282% ( 2) 00:07:57.863 16.345 - 16.443: 99.6443% ( 2) 00:07:57.863 16.443 - 16.542: 99.6605% ( 2) 00:07:57.863 16.640 - 16.738: 99.6767% ( 2) 00:07:57.863 16.738 - 16.837: 99.6847% ( 1) 00:07:57.863 16.837 - 16.935: 
99.6928% ( 1) 00:07:57.863 16.935 - 17.034: 99.7009% ( 1) 00:07:57.863 17.034 - 17.132: 99.7090% ( 1) 00:07:57.863 17.132 - 17.231: 99.7171% ( 1) 00:07:57.863 17.723 - 17.822: 99.7252% ( 1) 00:07:57.863 18.018 - 18.117: 99.7332% ( 1) 00:07:57.863 18.314 - 18.412: 99.7413% ( 1) 00:07:57.863 18.609 - 18.708: 99.7494% ( 1) 00:07:57.863 18.708 - 18.806: 99.7656% ( 2) 00:07:57.863 19.003 - 19.102: 99.7737% ( 1) 00:07:57.863 19.200 - 19.298: 99.7817% ( 1) 00:07:57.863 19.298 - 19.397: 99.7898% ( 1) 00:07:57.863 19.594 - 19.692: 99.7979% ( 1) 00:07:57.863 19.692 - 19.791: 99.8141% ( 2) 00:07:57.863 19.791 - 19.889: 99.8222% ( 1) 00:07:57.863 19.988 - 20.086: 99.8302% ( 1) 00:07:57.863 20.086 - 20.185: 99.8383% ( 1) 00:07:57.863 20.480 - 20.578: 99.8464% ( 1) 00:07:57.863 20.677 - 20.775: 99.8545% ( 1) 00:07:57.863 20.874 - 20.972: 99.8626% ( 1) 00:07:57.863 21.071 - 21.169: 99.8707% ( 1) 00:07:57.863 21.169 - 21.268: 99.8787% ( 1) 00:07:57.863 21.268 - 21.366: 99.8868% ( 1) 00:07:57.863 22.351 - 22.449: 99.8949% ( 1) 00:07:57.863 23.237 - 23.335: 99.9030% ( 1) 00:07:57.863 23.335 - 23.434: 99.9111% ( 1) 00:07:57.863 23.434 - 23.532: 99.9192% ( 1) 00:07:57.863 24.222 - 24.320: 99.9272% ( 1) 00:07:57.863 25.206 - 25.403: 99.9353% ( 1) 00:07:57.863 25.994 - 26.191: 99.9434% ( 1) 00:07:57.863 28.160 - 28.357: 99.9515% ( 1) 00:07:57.863 42.929 - 43.126: 99.9596% ( 1) 00:07:57.863 75.225 - 75.618: 99.9677% ( 1) 00:07:57.863 109.489 - 110.277: 99.9757% ( 1) 00:07:57.863 115.791 - 116.578: 99.9838% ( 1) 00:07:57.863 376.517 - 378.092: 99.9919% ( 1) 00:07:57.863 409.600 - 412.751: 100.0000% ( 1) 00:07:57.863 00:07:57.863 00:07:57.863 real 0m1.238s 00:07:57.863 user 0m1.075s 00:07:57.863 sys 0m0.110s 00:07:57.863 17:38:21 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.863 ************************************ 00:07:57.863 END TEST nvme_overhead 00:07:57.863 ************************************ 00:07:57.863 17:38:21 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:57.863 17:38:21 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:57.863 17:38:21 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:57.863 17:38:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.863 17:38:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.863 ************************************ 00:07:57.863 START TEST nvme_arbitration 00:07:57.863 ************************************ 00:07:57.863 17:38:21 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:01.157 Initializing NVMe Controllers 00:08:01.157 Attached to 0000:00:10.0 00:08:01.157 Attached to 0000:00:11.0 00:08:01.157 Attached to 0000:00:13.0 00:08:01.157 Attached to 0000:00:12.0 00:08:01.157 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:01.157 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:01.157 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:01.157 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:01.157 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:01.157 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:01.157 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:01.157 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:01.157 Initialization complete. 
Launching workers. 00:08:01.157 Starting thread on core 1 with urgent priority queue 00:08:01.157 Starting thread on core 2 with urgent priority queue 00:08:01.157 Starting thread on core 3 with urgent priority queue 00:08:01.157 Starting thread on core 0 with urgent priority queue 00:08:01.157 QEMU NVMe Ctrl (12340 ) core 0: 896.00 IO/s 111.61 secs/100000 ios 00:08:01.157 QEMU NVMe Ctrl (12342 ) core 0: 896.00 IO/s 111.61 secs/100000 ios 00:08:01.157 QEMU NVMe Ctrl (12341 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:08:01.157 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:08:01.157 QEMU NVMe Ctrl (12343 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:08:01.157 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios 00:08:01.157 ======================================================== 00:08:01.157 00:08:01.157 00:08:01.157 real 0m3.350s 00:08:01.157 user 0m9.285s 00:08:01.157 sys 0m0.122s 00:08:01.157 17:38:24 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.157 ************************************ 00:08:01.157 END TEST nvme_arbitration 00:08:01.157 ************************************ 00:08:01.157 17:38:24 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:01.157 17:38:24 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.157 17:38:24 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.157 17:38:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.157 17:38:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.157 ************************************ 00:08:01.157 START TEST nvme_single_aen 00:08:01.157 ************************************ 00:08:01.157 17:38:24 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:01.418 Asynchronous Event Request test 00:08:01.418 Attached to 0000:00:10.0 00:08:01.418 Attached to 0000:00:11.0 00:08:01.418 Attached to 0000:00:13.0 00:08:01.418 Attached to 0000:00:12.0 00:08:01.418 Reset controller to setup AER completions for this process 00:08:01.418 Registering asynchronous event callbacks... 
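
The AER tests that follow work by making a normally rare event inevitable: register an asynchronous-event callback, then use Set Features to push the temperature threshold (feature 0x04, threshold in Kelvin in CDW11 bits 15:0) below the controller's reported composite temperature of 323 K, so the controller raises an async event pointing at log page 2 (SMART / Health Information). A sketch of the arming step, assuming the public driver API (the 200 K value is arbitrary):

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
        /* Matches the "aer_cb for log page 2" lines below: the completion
         * dword carries the event type/info and the log page to read. */
    }

    static void arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr) {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        uint32_t tmpth = 200;               /* Kelvin; well below the reported 323 K */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        tmpth, 0, NULL, 0, NULL, NULL);
    }
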
00:08:01.418 Getting orig temperature thresholds of all controllers 00:08:01.418 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.418 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.418 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.418 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:01.418 Setting all controllers temperature threshold low to trigger AER 00:08:01.418 Waiting for all controllers temperature threshold to be set lower 00:08:01.418 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.418 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:01.418 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.418 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:01.418 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.418 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:01.418 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:01.418 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:01.418 Waiting for all controllers to trigger AER and reset threshold 00:08:01.418 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.418 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.418 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.418 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.418 Cleaning up... 00:08:01.418 00:08:01.418 real 0m0.234s 00:08:01.418 user 0m0.072s 00:08:01.418 sys 0m0.112s 00:08:01.418 17:38:24 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.418 ************************************ 00:08:01.418 17:38:24 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:01.418 END TEST nvme_single_aen 00:08:01.418 ************************************ 00:08:01.418 17:38:24 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:01.418 17:38:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.418 17:38:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.418 17:38:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.418 ************************************ 00:08:01.418 START TEST nvme_doorbell_aers 00:08:01.418 ************************************ 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.418 17:38:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:01.677 [2024-11-20 17:38:25.091811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:11.685 Executing: test_write_invalid_db 00:08:11.685 Waiting for AER completion... 00:08:11.685 Failure: test_write_invalid_db 00:08:11.685 00:08:11.685 Executing: test_invalid_db_write_overflow_sq 00:08:11.685 Waiting for AER completion... 00:08:11.685 Failure: test_invalid_db_write_overflow_sq 00:08:11.685 00:08:11.685 Executing: test_invalid_db_write_overflow_cq 00:08:11.685 Waiting for AER completion... 00:08:11.685 Failure: test_invalid_db_write_overflow_cq 00:08:11.685 00:08:11.685 17:38:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:11.685 17:38:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:11.685 [2024-11-20 17:38:35.138159] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:21.662 Executing: test_write_invalid_db 00:08:21.662 Waiting for AER completion... 00:08:21.662 Failure: test_write_invalid_db 00:08:21.662 00:08:21.662 Executing: test_invalid_db_write_overflow_sq 00:08:21.662 Waiting for AER completion... 00:08:21.662 Failure: test_invalid_db_write_overflow_sq 00:08:21.662 00:08:21.662 Executing: test_invalid_db_write_overflow_cq 00:08:21.662 Waiting for AER completion... 00:08:21.662 Failure: test_invalid_db_write_overflow_cq 00:08:21.662 00:08:21.662 17:38:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:21.662 17:38:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:21.662 [2024-11-20 17:38:45.182743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:31.749 Executing: test_write_invalid_db 00:08:31.749 Waiting for AER completion... 00:08:31.749 Failure: test_write_invalid_db 00:08:31.749 00:08:31.749 Executing: test_invalid_db_write_overflow_sq 00:08:31.749 Waiting for AER completion... 00:08:31.749 Failure: test_invalid_db_write_overflow_sq 00:08:31.749 00:08:31.749 Executing: test_invalid_db_write_overflow_cq 00:08:31.749 Waiting for AER completion... 
00:08:31.749 Failure: test_invalid_db_write_overflow_cq 00:08:31.749 00:08:31.749 17:38:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:31.749 17:38:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:31.749 [2024-11-20 17:38:55.207224] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.714 Executing: test_write_invalid_db 00:08:41.714 Waiting for AER completion... 00:08:41.714 Failure: test_write_invalid_db 00:08:41.714 00:08:41.714 Executing: test_invalid_db_write_overflow_sq 00:08:41.714 Waiting for AER completion... 00:08:41.714 Failure: test_invalid_db_write_overflow_sq 00:08:41.714 00:08:41.714 Executing: test_invalid_db_write_overflow_cq 00:08:41.714 Waiting for AER completion... 00:08:41.714 Failure: test_invalid_db_write_overflow_cq 00:08:41.714 00:08:41.714 00:08:41.714 real 0m40.200s 00:08:41.714 user 0m34.132s 00:08:41.714 sys 0m5.657s 00:08:41.714 17:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.714 ************************************ 00:08:41.714 END TEST nvme_doorbell_aers 00:08:41.714 ************************************ 00:08:41.714 17:39:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:41.714 17:39:05 nvme -- nvme/nvme.sh@97 -- # uname 00:08:41.714 17:39:05 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:41.714 17:39:05 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:41.714 17:39:05 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:41.714 17:39:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.714 17:39:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.714 ************************************ 00:08:41.714 START TEST nvme_multi_aen 00:08:41.714 ************************************ 00:08:41.714 17:39:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:41.714 [2024-11-20 17:39:05.248961] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.714 [2024-11-20 17:39:05.249047] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.714 [2024-11-20 17:39:05.249058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.714 [2024-11-20 17:39:05.250785] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.250831] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.250841] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.251756] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. 
Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.251788] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.251796] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.252959] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.252990] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.715 [2024-11-20 17:39:05.252998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63396) is not found. Dropping the request. 00:08:41.973 Child process pid: 63917 00:08:41.973 [Child] Asynchronous Event Request test 00:08:41.973 [Child] Attached to 0000:00:10.0 00:08:41.973 [Child] Attached to 0000:00:11.0 00:08:41.973 [Child] Attached to 0000:00:13.0 00:08:41.973 [Child] Attached to 0000:00:12.0 00:08:41.973 [Child] Registering asynchronous event callbacks... 00:08:41.973 [Child] Getting orig temperature thresholds of all controllers 00:08:41.973 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:41.973 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 [Child] Cleaning up... 00:08:41.973 Asynchronous Event Request test 00:08:41.973 Attached to 0000:00:10.0 00:08:41.973 Attached to 0000:00:11.0 00:08:41.973 Attached to 0000:00:13.0 00:08:41.973 Attached to 0000:00:12.0 00:08:41.973 Reset controller to setup AER completions for this process 00:08:41.973 Registering asynchronous event callbacks... 
00:08:41.973 Getting orig temperature thresholds of all controllers 00:08:41.973 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.973 Setting all controllers temperature threshold low to trigger AER 00:08:41.973 Waiting for all controllers temperature threshold to be set lower 00:08:41.973 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:41.973 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:41.973 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:41.973 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.973 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:41.973 Waiting for all controllers to trigger AER and reset threshold 00:08:41.973 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.973 Cleaning up... 00:08:42.231 00:08:42.231 real 0m0.446s 00:08:42.231 user 0m0.136s 00:08:42.231 sys 0m0.198s 00:08:42.231 ************************************ 00:08:42.231 END TEST nvme_multi_aen 00:08:42.231 17:39:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.232 17:39:05 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:42.232 ************************************ 00:08:42.232 17:39:05 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:42.232 17:39:05 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.232 17:39:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.232 17:39:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.232 ************************************ 00:08:42.232 START TEST nvme_startup 00:08:42.232 ************************************ 00:08:42.232 17:39:05 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:42.490 Initializing NVMe Controllers 00:08:42.490 Attached to 0000:00:10.0 00:08:42.490 Attached to 0000:00:11.0 00:08:42.490 Attached to 0000:00:13.0 00:08:42.490 Attached to 0000:00:12.0 00:08:42.490 Initialization complete. 00:08:42.490 Time used:145269.984 (us). 
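Every START TEST / END TEST banner pair in this log, together with the real/user/sys timings, comes from the run_test wrapper in autotest_common.sh, which each suite invokes with a test name plus the command under test. A minimal sketch of that pattern, assuming the real wrapper only adds xtrace management and the argument checks ('[' 6 -le 1 ']' and friends) visible in the trace:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines printed after each test
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Invocations such as run_test nvme_startup .../startup -t 1000000 in the surrounding trace follow exactly this calling convention.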
00:08:42.490 00:08:42.490 real 0m0.228s 00:08:42.490 user 0m0.075s 00:08:42.490 sys 0m0.112s 00:08:42.490 17:39:05 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.490 ************************************ 00:08:42.490 17:39:05 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:42.490 END TEST nvme_startup 00:08:42.490 ************************************ 00:08:42.490 17:39:05 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:42.490 17:39:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.490 17:39:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.490 17:39:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.490 ************************************ 00:08:42.490 START TEST nvme_multi_secondary 00:08:42.490 ************************************ 00:08:42.490 17:39:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:42.490 17:39:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63973 00:08:42.490 17:39:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:42.490 17:39:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63974 00:08:42.490 17:39:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:42.490 17:39:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:45.771 Initializing NVMe Controllers 00:08:45.771 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.771 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.771 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.771 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.771 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:45.771 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:45.771 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:45.771 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:45.771 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:45.771 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:45.771 Initialization complete. Launching workers. 
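The nvme_multi_secondary run traced above exercises SPDK's multi-process mode: three spdk_nvme_perf instances share DPDK shared-memory instance 0 (-i 0) but are pinned to disjoint cores, so the first becomes the primary process and the other two attach to it as secondaries against the same four controllers. Reconstructed from the nvme.sh@51-57 trace lines (a sketch; only the commands visible in the trace are certain, and SPDK_BIN_DIR in this run is /home/vagrant/spdk_repo/spdk/build/bin):

    nvme_multi_secondary() {
        # primary: 5 s of QD16 4 KiB reads pinned to core 0 (mask 0x1)
        "$SPDK_BIN_DIR/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
        pid0=$!
        # first secondary: 3 s on core 1 (mask 0x2)
        "$SPDK_BIN_DIR/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
        pid1=$!
        # second secondary runs in the foreground on core 2 (mask 0x4)
        "$SPDK_BIN_DIR/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
        wait $pid0
        wait $pid1
    }

The per-core latency tables that follow are each instance's own summary; the secondaries report higher average latency than the primary, which still has two seconds alone on the drives after the 3 s runs finish.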
00:08:45.771 ======================================================== 00:08:45.771 Latency(us) 00:08:45.771 Device Information : IOPS MiB/s Average min max 00:08:45.771 PCIE (0000:00:10.0) NSID 1 from core 1: 7218.33 28.20 2215.20 882.85 6044.97 00:08:45.771 PCIE (0000:00:11.0) NSID 1 from core 1: 7218.33 28.20 2216.41 905.05 5695.57 00:08:45.771 PCIE (0000:00:13.0) NSID 1 from core 1: 7218.33 28.20 2216.59 929.27 5929.25 00:08:45.771 PCIE (0000:00:12.0) NSID 1 from core 1: 7218.33 28.20 2216.70 942.28 5857.11 00:08:45.771 PCIE (0000:00:12.0) NSID 2 from core 1: 7218.33 28.20 2216.84 948.49 5607.03 00:08:45.771 PCIE (0000:00:12.0) NSID 3 from core 1: 7218.33 28.20 2216.83 905.57 6300.03 00:08:45.771 ======================================================== 00:08:45.771 Total : 43309.95 169.18 2216.43 882.85 6300.03 00:08:45.771 00:08:45.771 Initializing NVMe Controllers 00:08:45.771 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:45.771 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:45.771 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:45.771 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:45.771 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:45.771 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:45.771 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:45.771 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:45.771 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:45.771 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:45.771 Initialization complete. Launching workers. 00:08:45.771 ======================================================== 00:08:45.771 Latency(us) 00:08:45.771 Device Information : IOPS MiB/s Average min max 00:08:45.771 PCIE (0000:00:10.0) NSID 1 from core 2: 2906.57 11.35 5502.95 1442.18 17644.69 00:08:45.771 PCIE (0000:00:11.0) NSID 1 from core 2: 2906.57 11.35 5504.03 1372.87 17576.67 00:08:45.771 PCIE (0000:00:13.0) NSID 1 from core 2: 2906.57 11.35 5504.50 1359.98 14146.85 00:08:45.771 PCIE (0000:00:12.0) NSID 1 from core 2: 2906.57 11.35 5504.50 1346.07 14289.27 00:08:45.771 PCIE (0000:00:12.0) NSID 2 from core 2: 2906.57 11.35 5504.45 1355.90 17495.97 00:08:45.771 PCIE (0000:00:12.0) NSID 3 from core 2: 2906.57 11.35 5504.47 1141.32 17380.21 00:08:45.771 ======================================================== 00:08:45.771 Total : 17439.41 68.12 5504.15 1141.32 17644.69 00:08:45.771 00:08:45.771 17:39:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63973 00:08:47.668 Initializing NVMe Controllers 00:08:47.668 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.668 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.668 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.668 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.668 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:47.668 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:47.668 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:47.668 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:47.668 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:47.668 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:47.668 Initialization complete. Launching workers. 
00:08:47.668 ======================================================== 00:08:47.668 Latency(us) 00:08:47.668 Device Information : IOPS MiB/s Average min max 00:08:47.668 PCIE (0000:00:10.0) NSID 1 from core 0: 9801.52 38.29 1631.03 715.84 6774.13 00:08:47.668 PCIE (0000:00:11.0) NSID 1 from core 0: 9801.52 38.29 1631.94 707.91 6547.92 00:08:47.668 PCIE (0000:00:13.0) NSID 1 from core 0: 9801.52 38.29 1631.89 731.37 6696.97 00:08:47.668 PCIE (0000:00:12.0) NSID 1 from core 0: 9801.52 38.29 1631.84 725.59 6855.58 00:08:47.668 PCIE (0000:00:12.0) NSID 2 from core 0: 9801.52 38.29 1631.79 721.90 6535.83 00:08:47.668 PCIE (0000:00:12.0) NSID 3 from core 0: 9801.52 38.29 1631.74 645.41 6266.07 00:08:47.668 ======================================================== 00:08:47.668 Total : 58809.14 229.72 1631.71 645.41 6855.58 00:08:47.668 00:08:47.668 17:39:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63974 00:08:47.668 17:39:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64043 00:08:47.668 17:39:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:47.668 17:39:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64044 00:08:47.668 17:39:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:47.668 17:39:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:50.973 Initializing NVMe Controllers 00:08:50.973 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.973 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.973 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.973 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.973 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:50.973 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:50.973 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:50.973 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:50.973 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:50.973 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:50.973 Initialization complete. Launching workers. 
00:08:50.973 ======================================================== 00:08:50.973 Latency(us) 00:08:50.973 Device Information : IOPS MiB/s Average min max 00:08:50.973 PCIE (0000:00:10.0) NSID 1 from core 1: 7038.77 27.50 2271.77 877.13 6988.33 00:08:50.973 PCIE (0000:00:11.0) NSID 1 from core 1: 7038.77 27.50 2272.76 894.50 6970.69 00:08:50.973 PCIE (0000:00:13.0) NSID 1 from core 1: 7038.77 27.50 2272.73 905.95 7106.68 00:08:50.973 PCIE (0000:00:12.0) NSID 1 from core 1: 7038.77 27.50 2272.86 892.86 6660.99 00:08:50.973 PCIE (0000:00:12.0) NSID 2 from core 1: 7038.77 27.50 2273.01 868.67 7683.77 00:08:50.973 PCIE (0000:00:12.0) NSID 3 from core 1: 7038.77 27.50 2273.15 906.18 7527.87 00:08:50.973 ======================================================== 00:08:50.973 Total : 42232.62 164.97 2272.71 868.67 7683.77 00:08:50.973 00:08:50.973 Initializing NVMe Controllers 00:08:50.973 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.973 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.973 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.973 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.973 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:50.973 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:50.973 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:50.973 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:50.973 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:50.973 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:50.973 Initialization complete. Launching workers. 00:08:50.973 ======================================================== 00:08:50.973 Latency(us) 00:08:50.973 Device Information : IOPS MiB/s Average min max 00:08:50.973 PCIE (0000:00:10.0) NSID 1 from core 0: 6942.96 27.12 2302.95 757.22 7932.03 00:08:50.973 PCIE (0000:00:11.0) NSID 1 from core 0: 6942.96 27.12 2303.98 776.56 7870.76 00:08:50.973 PCIE (0000:00:13.0) NSID 1 from core 0: 6942.96 27.12 2303.90 793.26 9710.02 00:08:50.973 PCIE (0000:00:12.0) NSID 1 from core 0: 6942.96 27.12 2303.82 766.04 10922.62 00:08:50.973 PCIE (0000:00:12.0) NSID 2 from core 0: 6942.96 27.12 2303.74 772.76 11534.13 00:08:50.973 PCIE (0000:00:12.0) NSID 3 from core 0: 6948.29 27.14 2301.90 774.64 7842.62 00:08:50.973 ======================================================== 00:08:50.973 Total : 41663.10 162.75 2303.38 757.22 11534.13 00:08:50.973 00:08:52.991 Initializing NVMe Controllers 00:08:52.991 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.991 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.991 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.991 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.991 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:52.991 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:52.991 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:52.991 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:52.991 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:52.991 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:52.991 Initialization complete. Launching workers. 
00:08:52.991 ======================================================== 00:08:52.991 Latency(us) 00:08:52.991 Device Information : IOPS MiB/s Average min max 00:08:52.991 PCIE (0000:00:10.0) NSID 1 from core 2: 4296.78 16.78 3721.62 801.89 13003.34 00:08:52.991 PCIE (0000:00:11.0) NSID 1 from core 2: 4296.78 16.78 3723.12 825.06 16168.57 00:08:52.991 PCIE (0000:00:13.0) NSID 1 from core 2: 4296.78 16.78 3722.50 799.75 15697.46 00:08:52.991 PCIE (0000:00:12.0) NSID 1 from core 2: 4296.78 16.78 3722.61 791.39 13769.79 00:08:52.991 PCIE (0000:00:12.0) NSID 2 from core 2: 4296.78 16.78 3722.73 722.58 14422.87 00:08:52.991 PCIE (0000:00:12.0) NSID 3 from core 2: 4296.78 16.78 3722.67 647.00 18866.14 00:08:52.991 ======================================================== 00:08:52.991 Total : 25780.65 100.71 3722.54 647.00 18866.14 00:08:52.991 00:08:52.991 17:39:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64043 00:08:52.991 17:39:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64044 00:08:52.991 00:08:52.991 real 0m10.703s 00:08:52.991 user 0m18.416s 00:08:52.991 sys 0m0.641s 00:08:52.991 17:39:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.991 17:39:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:52.991 ************************************ 00:08:52.991 END TEST nvme_multi_secondary 00:08:52.991 ************************************ 00:08:53.250 17:39:16 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:53.250 17:39:16 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62999 ]] 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@1094 -- # kill 62999 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@1095 -- # wait 62999 00:08:53.250 [2024-11-20 17:39:16.563453] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.563507] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.563527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.563539] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.565257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.565295] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.565307] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.565320] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.566938] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 
00:08:53.250 [2024-11-20 17:39:16.566976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.566987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.566999] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.568667] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.568702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.568712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 [2024-11-20 17:39:16.568722] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63916) is not found. Dropping the request. 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:53.250 17:39:16 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.250 17:39:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.250 ************************************ 00:08:53.250 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:53.250 ************************************ 00:08:53.250 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:53.250 * Looking for test storage... 
00:08:53.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:53.250 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:53.250 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:53.250 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:53.509 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:53.509 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.509 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.509 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.509 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.509 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:53.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.510 --rc genhtml_branch_coverage=1 00:08:53.510 --rc genhtml_function_coverage=1 00:08:53.510 --rc genhtml_legend=1 00:08:53.510 --rc geninfo_all_blocks=1 00:08:53.510 --rc geninfo_unexecuted_blocks=1 00:08:53.510 00:08:53.510 ' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:53.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.510 --rc genhtml_branch_coverage=1 00:08:53.510 --rc genhtml_function_coverage=1 00:08:53.510 --rc genhtml_legend=1 00:08:53.510 --rc geninfo_all_blocks=1 00:08:53.510 --rc geninfo_unexecuted_blocks=1 00:08:53.510 00:08:53.510 ' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:53.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.510 --rc genhtml_branch_coverage=1 00:08:53.510 --rc genhtml_function_coverage=1 00:08:53.510 --rc genhtml_legend=1 00:08:53.510 --rc geninfo_all_blocks=1 00:08:53.510 --rc geninfo_unexecuted_blocks=1 00:08:53.510 00:08:53.510 ' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:53.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.510 --rc genhtml_branch_coverage=1 00:08:53.510 --rc genhtml_function_coverage=1 00:08:53.510 --rc genhtml_legend=1 00:08:53.510 --rc geninfo_all_blocks=1 00:08:53.510 --rc geninfo_unexecuted_blocks=1 00:08:53.510 00:08:53.510 ' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:53.510 
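The scripts/common.sh trace above, from the lcov --version check through the LCOV_OPTS exports, is SPDK's shell version comparator deciding that the installed lcov (1.15) is older than 2, which selects the pre-2.0 --rc option spellings exported next. Reduced to the '<' path exercised here, the helper looks roughly like this (a sketch reconstructed from the trace; the real cmp_versions handles the other operators through the case statement visible above):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 IFS=.-    # split version strings on dots and dashes
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # '<' is false
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # '<' is true
        done
        return 1    # equal versions are not strictly less
    }

Here lt 1.15 2 splits the arguments into (1 15) and (2) and compares field by field; 1 < 2 decides it on the first iteration, matching the (( ver1[v] < ver2[v] )) / return 0 steps in the trace.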
17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64207 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64207 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64207 ']' 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
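Once the spdk_tgt target is up, this test arms a one-shot fault on nvme0's admin queue and checks that a controller reset can rescue a command stuck behind it: bdev_nvme_add_error_injection tells the driver to hold the next admin Get Features (opcode 10) for up to err_injection_timeout microseconds and complete it in software with the injected SCT 0x0 / SC 0x1 instead of submitting it, a Get Features is then fired over RPC in the background, and bdev_nvme_reset_controller must flush it well inside test_timeout. The RPC sequence, assembled from the trace that follows (names and flags exactly as they appear there; the redirect into the mktemp file is my simplification of how the script captures the JSON reply it later reads with jq -r .cpl):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # admin Get Features, Number of Queues (opc 0x0a, cdw10 0x7); held by the injection
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > /tmp/err_inj_xxdC4.txt &
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0    # must complete the held admin command

The reply's cpl field is the raw 16-byte NVMe completion, base64-encoded; the status word sits in its last two bytes (bit 0 phase tag, bits 8:1 SC, bits 11:9 SCT). A compact equivalent of the base64_decode_bits unpacking performed later in the log:

    decode_cpl_status() {
        local bytes=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        local status=$(( (bytes[15] << 8) | bytes[14] ))
        printf 'SCT=0x%x SC=0x%x\n' $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff ))
    }
    decode_cpl_status AAAAAAAAAAAAAAAAAAACAA==    # -> SCT=0x0 SC=0x1, the injected values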
00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.510 17:39:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:53.510 [2024-11-20 17:39:17.005488] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:08:53.510 [2024-11-20 17:39:17.005637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64207 ] 00:08:53.769 [2024-11-20 17:39:17.187761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.769 [2024-11-20 17:39:17.300345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.769 [2024-11-20 17:39:17.300659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.769 [2024-11-20 17:39:17.301103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.769 [2024-11-20 17:39:17.301112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.705 nvme0n1 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_xxdC4.txt 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.705 17:39:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.705 true 00:08:54.705 17:39:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.705 17:39:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:54.705 17:39:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732124358 00:08:54.705 17:39:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64230 00:08:54.705 17:39:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:54.705 17:39:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:54.705 17:39:18 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.614 [2024-11-20 17:39:20.025028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:56.614 [2024-11-20 17:39:20.025316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:56.614 [2024-11-20 17:39:20.025342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:56.614 [2024-11-20 17:39:20.025357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.614 [2024-11-20 17:39:20.029096] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:56.614 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64230 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64230 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64230 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_xxdC4.txt 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:56.614 17:39:20 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_xxdC4.txt 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64207 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64207 ']' 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64207 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64207 00:08:56.614 killing process with pid 64207 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64207' 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64207 00:08:56.614 17:39:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64207 00:08:58.579 17:39:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:58.579 17:39:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:58.579 00:08:58.579 
real 0m4.984s 00:08:58.579 user 0m17.492s 00:08:58.579 sys 0m0.560s 00:08:58.579 17:39:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.579 ************************************ 00:08:58.579 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:58.579 ************************************ 00:08:58.579 17:39:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:58.579 17:39:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:58.579 17:39:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:58.579 17:39:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.579 17:39:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.579 17:39:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.579 ************************************ 00:08:58.579 START TEST nvme_fio 00:08:58.579 ************************************ 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:58.579 17:39:21 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:58.579 17:39:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:58.579 17:39:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:58.579 17:39:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:58.840 17:39:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:58.840 17:39:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:58.840 17:39:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:59.101 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:59.101 fio-3.35 00:08:59.101 Starting 1 thread 00:09:05.760 00:09:05.760 test: (groupid=0, jobs=1): err= 0: pid=64369: Wed Nov 20 17:39:28 2024 00:09:05.760 read: IOPS=19.2k, BW=74.9MiB/s (78.6MB/s)(150MiB/2001msec) 00:09:05.760 slat (nsec): min=4799, max=64195, avg=6011.58, stdev=2289.64 00:09:05.760 clat (usec): min=243, max=43889, avg=3313.59, stdev=995.32 00:09:05.760 lat (usec): min=248, max=43894, avg=3319.60, stdev=996.26 00:09:05.760 clat percentiles (usec): 00:09:05.760 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2671], 00:09:05.760 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3097], 00:09:05.760 | 70.00th=[ 3326], 80.00th=[ 3752], 90.00th=[ 4752], 95.00th=[ 5473], 00:09:05.760 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 8356], 99.95th=[ 8586], 00:09:05.760 | 99.99th=[ 9503] 00:09:05.760 bw ( KiB/s): min=73040, max=76800, per=98.11%, avg=75264.00, stdev=1972.16, samples=3 00:09:05.760 iops : min=18260, max=19200, avg=18816.00, stdev=493.04, samples=3 00:09:05.760 write: IOPS=19.2k, BW=74.8MiB/s (78.5MB/s)(150MiB/2001msec); 0 zone resets 00:09:05.760 slat (nsec): min=4890, max=68192, avg=6266.61, stdev=2293.64 00:09:05.760 clat (usec): min=216, max=9257, avg=3337.49, stdev=970.39 00:09:05.760 lat (usec): min=221, max=9277, avg=3343.76, stdev=971.33 00:09:05.760 clat percentiles (usec): 00:09:05.760 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2704], 00:09:05.760 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3130], 00:09:05.760 | 70.00th=[ 3326], 80.00th=[ 3752], 90.00th=[ 4752], 95.00th=[ 5538], 00:09:05.760 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8291], 99.95th=[ 8717], 00:09:05.760 | 99.99th=[ 9241] 00:09:05.760 bw ( KiB/s): min=73064, max=77024, per=98.27%, avg=75314.67, stdev=2034.74, samples=3 00:09:05.760 iops : min=18266, max=19256, avg=18828.67, stdev=508.69, samples=3 
00:09:05.760 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:05.760 lat (msec) : 2=0.13%, 4=82.98%, 10=16.85%, 50=0.01% 00:09:05.760 cpu : usr=99.05%, sys=0.05%, ctx=3, majf=0, minf=607 00:09:05.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:05.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.760 issued rwts: total=38377,38340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.760 00:09:05.760 Run status group 0 (all jobs): 00:09:05.760 READ: bw=74.9MiB/s (78.6MB/s), 74.9MiB/s-74.9MiB/s (78.6MB/s-78.6MB/s), io=150MiB (157MB), run=2001-2001msec 00:09:05.760 WRITE: bw=74.8MiB/s (78.5MB/s), 74.8MiB/s-74.8MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:09:05.760 ----------------------------------------------------- 00:09:05.760 Suppressions used: 00:09:05.760 count bytes template 00:09:05.760 1 32 /usr/src/fio/parse.c 00:09:05.760 1 8 libtcmalloc_minimal.so 00:09:05.760 ----------------------------------------------------- 00:09:05.760 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:05.760 17:39:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:05.760 17:39:28 nvme.nvme_fio -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:05.760 17:39:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:05.760 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:05.760 fio-3.35 00:09:05.760 Starting 1 thread 00:09:15.758 00:09:15.758 test: (groupid=0, jobs=1): err= 0: pid=64430: Wed Nov 20 17:39:38 2024 00:09:15.758 read: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(146MiB/2001msec) 00:09:15.758 slat (usec): min=3, max=145, avg= 5.68, stdev= 2.97 00:09:15.758 clat (usec): min=261, max=8655, avg=3395.80, stdev=1170.57 00:09:15.758 lat (usec): min=265, max=8672, avg=3401.48, stdev=1171.89 00:09:15.758 clat percentiles (usec): 00:09:15.758 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:15.758 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2966], 60.00th=[ 3163], 00:09:15.758 | 70.00th=[ 3490], 80.00th=[ 4178], 90.00th=[ 5211], 95.00th=[ 5932], 00:09:15.758 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8455], 00:09:15.758 | 99.99th=[ 8586] 00:09:15.758 bw ( KiB/s): min=66360, max=80040, per=98.24%, avg=73557.33, stdev=6867.94, samples=3 00:09:15.758 iops : min=16590, max=20010, avg=18389.33, stdev=1716.99, samples=3 00:09:15.758 write: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(146MiB/2001msec); 0 zone resets 00:09:15.758 slat (nsec): min=3445, max=91143, avg=5919.22, stdev=2862.11 00:09:15.758 clat (usec): min=216, max=9069, avg=3413.95, stdev=1158.02 00:09:15.758 lat (usec): min=222, max=9075, avg=3419.87, stdev=1159.32 00:09:15.758 clat percentiles (usec): 00:09:15.758 | 1.00th=[ 2089], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:09:15.758 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2999], 60.00th=[ 3195], 00:09:15.758 | 70.00th=[ 3490], 80.00th=[ 4178], 90.00th=[ 5276], 95.00th=[ 5932], 00:09:15.758 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8455], 00:09:15.758 | 99.99th=[ 8586] 00:09:15.758 bw ( KiB/s): min=66360, max=79688, per=98.10%, avg=73464.00, stdev=6707.44, samples=3 00:09:15.758 iops : min=16590, max=19922, avg=18366.00, stdev=1676.86, samples=3 00:09:15.758 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.02% 00:09:15.758 lat (msec) : 2=0.68%, 4=77.06%, 10=22.21% 00:09:15.758 cpu : usr=98.95%, sys=0.10%, ctx=13, majf=0, minf=607 00:09:15.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:15.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.758 issued rwts: total=37457,37463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.758 00:09:15.758 Run status group 0 (all jobs): 00:09:15.758 READ: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:15.758 WRITE: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:15.758 ----------------------------------------------------- 00:09:15.758 Suppressions used: 00:09:15.758 count 
bytes template 00:09:15.758 1 32 /usr/src/fio/parse.c 00:09:15.758 1 8 libtcmalloc_minimal.so 00:09:15.758 ----------------------------------------------------- 00:09:15.758 00:09:15.758 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:15.758 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:15.758 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:15.758 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.019 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.019 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:16.281 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.281 17:39:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.281 17:39:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.281 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.281 fio-3.35 00:09:16.281 Starting 1 thread 00:09:24.428 00:09:24.428 test: (groupid=0, jobs=1): err= 0: pid=64486: Wed Nov 20 17:39:47 2024 00:09:24.428 read: IOPS=21.9k, BW=85.6MiB/s (89.8MB/s)(171MiB/2001msec) 00:09:24.428 slat (nsec): min=3395, max=87010, avg=5048.85, stdev=2101.73 00:09:24.428 clat (usec): min=206, max=8124, avg=2901.44, stdev=817.19 00:09:24.428 lat (usec): min=211, 
max=8196, avg=2906.48, stdev=818.33 00:09:24.428 clat percentiles (usec): 00:09:24.428 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:24.428 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2802], 00:09:24.428 | 70.00th=[ 2900], 80.00th=[ 3097], 90.00th=[ 3654], 95.00th=[ 4752], 00:09:24.428 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7439], 00:09:24.428 | 99.99th=[ 8029] 00:09:24.428 bw ( KiB/s): min=85352, max=88976, per=99.28%, avg=87045.33, stdev=1823.62, samples=3 00:09:24.428 iops : min=21338, max=22244, avg=21761.33, stdev=455.90, samples=3 00:09:24.428 write: IOPS=21.8k, BW=85.0MiB/s (89.2MB/s)(170MiB/2001msec); 0 zone resets 00:09:24.428 slat (nsec): min=3506, max=48456, avg=5319.04, stdev=2070.11 00:09:24.428 clat (usec): min=244, max=8137, avg=2934.99, stdev=817.88 00:09:24.428 lat (usec): min=248, max=8142, avg=2940.31, stdev=819.03 00:09:24.428 clat percentiles (usec): 00:09:24.428 | 1.00th=[ 2024], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:24.428 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:09:24.428 | 70.00th=[ 2933], 80.00th=[ 3130], 90.00th=[ 3654], 95.00th=[ 4817], 00:09:24.428 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7504], 00:09:24.428 | 99.99th=[ 7832] 00:09:24.428 bw ( KiB/s): min=85080, max=88912, per=100.00%, avg=87232.00, stdev=1959.12, samples=3 00:09:24.428 iops : min=21270, max=22228, avg=21808.00, stdev=489.78, samples=3 00:09:24.428 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:24.428 lat (msec) : 2=1.07%, 4=91.00%, 10=7.88% 00:09:24.428 cpu : usr=99.15%, sys=0.20%, ctx=9, majf=0, minf=608 00:09:24.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.428 issued rwts: total=43860,43565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.428 00:09:24.428 Run status group 0 (all jobs): 00:09:24.428 READ: bw=85.6MiB/s (89.8MB/s), 85.6MiB/s-85.6MiB/s (89.8MB/s-89.8MB/s), io=171MiB (180MB), run=2001-2001msec 00:09:24.428 WRITE: bw=85.0MiB/s (89.2MB/s), 85.0MiB/s-85.0MiB/s (89.2MB/s-89.2MB/s), io=170MiB (178MB), run=2001-2001msec 00:09:24.428 ----------------------------------------------------- 00:09:24.428 Suppressions used: 00:09:24.428 count bytes template 00:09:24.428 1 32 /usr/src/fio/parse.c 00:09:24.428 1 8 libtcmalloc_minimal.so 00:09:24.428 ----------------------------------------------------- 00:09:24.428 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:24.428 17:39:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.428 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.428 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:24.428 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:24.428 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:24.428 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.428 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:24.428 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:24.429 17:39:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.691 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:24.691 fio-3.35 00:09:24.691 Starting 1 thread 00:09:34.704 00:09:34.704 test: (groupid=0, jobs=1): err= 0: pid=64542: Wed Nov 20 17:39:56 2024 00:09:34.704 read: IOPS=17.8k, BW=69.5MiB/s (72.9MB/s)(139MiB/2001msec) 00:09:34.704 slat (usec): min=3, max=119, avg= 6.45, stdev= 3.14 00:09:34.704 clat (usec): min=718, max=12221, avg=3574.64, stdev=1189.22 00:09:34.704 lat (usec): min=725, max=12260, avg=3581.09, stdev=1190.59 00:09:34.704 clat percentiles (usec): 00:09:34.704 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2704], 00:09:34.704 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 3130], 60.00th=[ 3359], 00:09:34.704 | 70.00th=[ 3752], 80.00th=[ 4424], 90.00th=[ 5342], 95.00th=[ 6128], 00:09:34.704 | 99.00th=[ 7439], 99.50th=[ 7898], 99.90th=[ 9241], 99.95th=[10683], 00:09:34.704 | 99.99th=[11469] 00:09:34.704 bw ( KiB/s): min=65600, max=75792, per=97.39%, avg=69341.33, stdev=5610.22, samples=3 00:09:34.704 iops : min=16400, max=18948, avg=17335.33, stdev=1402.56, samples=3 00:09:34.704 write: IOPS=17.8k, BW=69.5MiB/s (72.9MB/s)(139MiB/2001msec); 0 zone resets 00:09:34.704 slat (nsec): min=3631, max=83019, avg=6756.12, stdev=3061.59 00:09:34.704 clat (usec): min=309, max=11660, avg=3591.13, stdev=1184.62 00:09:34.704 lat (usec): min=330, max=11666, avg=3597.88, stdev=1185.94 00:09:34.704 clat percentiles (usec): 00:09:34.704 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2573], 
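The same preamble repeats before each of the three fio runs in this section: `ldd` is run against the SPDK fio plugin, the first sanitizer runtime it links against is extracted, and both are preloaded before fio starts (ASAN must be the first DSO the dynamic loader maps, or it aborts at startup). A condensed sketch of that helper, with paths and names taken from the trace itself (the real logic lives in autotest_common.sh and does a bit more bookkeeping):

```bash
# Condensed from the autotest_common.sh trace above: find the sanitizer
# runtime the SPDK fio plugin links against and preload it ahead of the
# plugin itself, then hand everything else straight to fio.
fio_plugin() {
    local plugin=$1; shift
    local sanitizers=('libasan' 'libclang_rt.asan')
    local sanitizer asan_lib=

    for sanitizer in "${sanitizers[@]}"; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done

    # LD_PRELOAD order matters: sanitizer runtime first, ioengine second.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}

fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
```

Note the PCIe address is written with dots (`0000.00.13.0`): fio reserves `:` as a filename separator, so the SPDK plugin converts the dots back to the usual `0000:00:13.0` form internally.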
20.00th=[ 2737], 00:09:34.704 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3130], 60.00th=[ 3359], 00:09:34.704 | 70.00th=[ 3785], 80.00th=[ 4424], 90.00th=[ 5407], 95.00th=[ 6063], 00:09:34.704 | 99.00th=[ 7439], 99.50th=[ 7963], 99.90th=[ 9634], 99.95th=[10683], 00:09:34.704 | 99.99th=[11469] 00:09:34.704 bw ( KiB/s): min=65448, max=75480, per=97.29%, avg=69266.67, stdev=5427.81, samples=3 00:09:34.704 iops : min=16362, max=18870, avg=17316.67, stdev=1356.95, samples=3 00:09:34.704 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:34.704 lat (msec) : 2=0.22%, 4=73.66%, 10=26.03%, 20=0.08% 00:09:34.704 cpu : usr=99.00%, sys=0.00%, ctx=4, majf=0, minf=606 00:09:34.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:34.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.704 issued rwts: total=35617,35617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.704 00:09:34.704 Run status group 0 (all jobs): 00:09:34.704 READ: bw=69.5MiB/s (72.9MB/s), 69.5MiB/s-69.5MiB/s (72.9MB/s-72.9MB/s), io=139MiB (146MB), run=2001-2001msec 00:09:34.704 WRITE: bw=69.5MiB/s (72.9MB/s), 69.5MiB/s-69.5MiB/s (72.9MB/s-72.9MB/s), io=139MiB (146MB), run=2001-2001msec 00:09:34.704 ----------------------------------------------------- 00:09:34.704 Suppressions used: 00:09:34.704 count bytes template 00:09:34.704 1 32 /usr/src/fio/parse.c 00:09:34.704 1 8 libtcmalloc_minimal.so 00:09:34.704 ----------------------------------------------------- 00:09:34.704 00:09:34.704 17:39:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:34.704 17:39:57 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:34.704 00:09:34.704 real 0m35.452s 00:09:34.704 user 0m17.052s 00:09:34.704 sys 0m35.706s 00:09:34.704 17:39:57 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.704 ************************************ 00:09:34.704 END TEST nvme_fio 00:09:34.704 ************************************ 00:09:34.704 17:39:57 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:34.704 00:09:34.704 real 1m45.484s 00:09:34.705 user 3m39.649s 00:09:34.705 sys 0m46.454s 00:09:34.705 17:39:57 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.705 ************************************ 00:09:34.705 END TEST nvme 00:09:34.705 ************************************ 00:09:34.705 17:39:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:34.705 17:39:57 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:34.705 17:39:57 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:34.705 17:39:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.705 17:39:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.705 17:39:57 -- common/autotest_common.sh@10 -- # set +x 00:09:34.705 ************************************ 00:09:34.705 START TEST nvme_scc 00:09:34.705 ************************************ 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:34.705 * Looking for test storage... 
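The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` triplets just above come from the `run_test` wrapper used throughout autotest. Its approximate shape, inferred from the markers in this log (the real helper in autotest_common.sh also manages xtrace state and argument checks):

```bash
# Approximate shape of run_test, reconstructed from the banners above;
# a sketch, not the verbatim implementation.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"            # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
```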
00:09:34.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.705 --rc genhtml_branch_coverage=1 00:09:34.705 --rc genhtml_function_coverage=1 00:09:34.705 --rc genhtml_legend=1 00:09:34.705 --rc geninfo_all_blocks=1 00:09:34.705 --rc geninfo_unexecuted_blocks=1 00:09:34.705 00:09:34.705 ' 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.705 --rc genhtml_branch_coverage=1 00:09:34.705 --rc genhtml_function_coverage=1 00:09:34.705 --rc genhtml_legend=1 00:09:34.705 --rc geninfo_all_blocks=1 00:09:34.705 --rc geninfo_unexecuted_blocks=1 00:09:34.705 00:09:34.705 ' 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:34.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.705 --rc genhtml_branch_coverage=1 00:09:34.705 --rc genhtml_function_coverage=1 00:09:34.705 --rc genhtml_legend=1 00:09:34.705 --rc geninfo_all_blocks=1 00:09:34.705 --rc geninfo_unexecuted_blocks=1 00:09:34.705 00:09:34.705 ' 00:09:34.705 17:39:57 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.705 --rc genhtml_branch_coverage=1 00:09:34.705 --rc genhtml_function_coverage=1 00:09:34.705 --rc genhtml_legend=1 00:09:34.705 --rc geninfo_all_blocks=1 00:09:34.705 --rc geninfo_unexecuted_blocks=1 00:09:34.705 00:09:34.705 ' 00:09:34.705 17:39:57 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.705 17:39:57 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.705 17:39:57 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.705 17:39:57 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.705 17:39:57 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.705 17:39:57 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:34.705 17:39:57 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
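The PATH echoed above carries the golangci/protoc/Go prefixes several times over because paths/export.sh prepends them unconditionally each time it is sourced, and it gets sourced once per nested `source` of functions.sh. That is harmless, but a guard like the following would keep the variable tidy (`prepend_path` is a hypothetical helper, not part of the repo):

```bash
# Hypothetical dedup-on-prepend guard; paths/export.sh itself prepends
# unconditionally, which is why the echoed PATH repeats these prefixes.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH
```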
00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:34.705 17:39:57 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:34.706 17:39:57 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:34.706 17:39:57 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:34.706 17:39:57 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:34.706 17:39:57 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:34.706 17:39:57 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:34.706 17:39:57 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.706 17:39:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:34.706 17:39:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:34.706 17:39:57 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:34.706 17:39:57 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:34.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.706 Waiting for block devices as requested 00:09:34.706 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.706 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.706 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.007 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:40.007 17:40:03 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:40.007 17:40:03 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:40.007 17:40:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.007 17:40:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:40.007 17:40:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:40.008 17:40:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:40.008 17:40:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:40.008 17:40:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.008 17:40:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
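Everything from here to the end of the section is `nvme_get` walking the output of `nvme id-ctrl` one field at a time: `IFS=:` splits each `reg : val` line, and an `eval` stores the pair into a global associative array. Condensed from the trace (the real functions.sh loop does more careful value trimming):

```bash
# Condensed from the nvme_get trace above: id-ctrl prints one "reg : val"
# pair per line, so splitting on ':' and eval'ing each pair fills a global
# associative array, giving nvme0[vid]=0x1b36, nvme0[sn]='12341   ', etc.
declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}               # keys are padded, strip it
    [[ -z $reg || -z $val ]] && continue   # skip headers/blank lines
    eval "nvme0[$reg]=\"${val# }\""
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

echo "${nvme0[mdts]}"   # 7 -> max transfer 2^7 pages (512 KiB at 4 KiB)
```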
00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.008 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
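Many of these fields are bitmasks rather than plain numbers. For example, the `oacs=0x12a` captured just above decodes, per the NVMe base specification's Optional Admin Command Support bit assignments, as follows:

```bash
# Decoding oacs=0x12a (bit positions per the NVMe base spec):
oacs=$((0x12a))
(( oacs & (1 << 1) )) && echo 'Format NVM supported'             # bit 1
(( oacs & (1 << 3) )) && echo 'Namespace Management supported'   # bit 3
(( oacs & (1 << 5) )) && echo 'Directives supported'             # bit 5
(( oacs & (1 << 8) )) && echo 'Doorbell Buffer Config supported' # bit 8
```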
00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.009 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:40.010 17:40:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.010 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.011 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.011 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:40.012 
17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.012 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:40.013 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- 
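The assignments route through eval so that values containing spaces and colons (the lbaf* descriptors, nguid, the power-state string) land in the array unmangled; the trace prints the already-expanded form, e.g. eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "'. A stand-alone demonstration of the same effect, using $-indirection as a defensive variant rather than the tree's exact quoting:

	declare -A ng0n1
	ref=ng0n1 reg=lbaf0 val='ms:0 lbads:9 rp:0 '
	eval "${ref}[\$reg]=\$val"        # eval re-parses: ng0n1[$reg]=$val
	printf '<%s>\n' "${ng0n1[lbaf0]}" # -> <ms:0 lbads:9 rp:0 > (trailing space preserved)

The right-hand side of a bash assignment is not word-split, so once eval has rebuilt the statement no further quoting is needed.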
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:40.013 17:40:03 nvme_scc 
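The for-loop at functions.sh line 54, visible again just above, is why each namespace is dumped twice: for ctrl=/sys/class/nvme/nvme0 the extended glob expands to @("ng0"|"nvme0n")*, matching both the generic character node ng0n1 and the block node nvme0n1, and nvme_get runs id-ns against each. A runnable reduction (extglob is required for the @(...) pattern; nullglob is an assumption to keep the loop silent when nothing matches):

	shopt -s extglob nullglob

	ctrl=/sys/class/nvme/nvme0
	# ${ctrl##*nvme} -> "0", ${ctrl##*/} -> "nvme0"
	for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
		ns_dev=${ns##*/} # ng0n1, then nvme0n1
		echo "namespace node: $ns_dev"
	done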
-- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.013 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:40.014 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:40.014 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:40.015 17:40:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:40.015 17:40:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:40.015 17:40:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.015 17:40:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:40.015 17:40:03 
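Lines 58-63 above are the bookkeeping step: each namespace name is keyed by its namespace id into the per-controller map (through the nameref taken at line 53), and the controller itself is entered into the global ctrls/nvmes/bdfs maps plus the ordered_ctrls list, with bdf 0000:00:11.0 read from sysfs. A sketch of that wiring, assuming the global maps are declared once elsewhere in functions.sh; register_ctrl is a hypothetical wrapper, not a helper from the tree:

	declare -A ctrls nvmes bdfs
	declare -a ordered_ctrls
	declare -A nvme0_ns # populated per controller before registration

	register_ctrl() { # usage: register_ctrl nvme0 0000:00:11.0 ns_node...
		local ctrl_dev=$1 pci=$2 ns
		local -n _ctrl_ns=${ctrl_dev}_ns # nameref, as at functions.sh line 53
		shift 2
		for ns in "$@"; do
			_ctrl_ns[${ns##*n}]=$ns # "ng0n1" -> key 1; nvme0n1 overwrites it (line 58)
		done
		ctrls[$ctrl_dev]=$ctrl_dev                 # line 60
		nvmes[$ctrl_dev]=${ctrl_dev}_ns            # line 61
		bdfs[$ctrl_dev]=$pci                       # line 62
		ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev # line 63: slot = controller number
	}

	register_ctrl nvme0 0000:00:11.0 ng0n1 nvme0n1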
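Immediately after nvme0 is registered, the loop at functions.sh line 47 moves on to /sys/class/nvme/nvme1 and gates it through pci_can_use (scripts/common.sh lines 18-27 in the trace). Both PCI_ALLOWED and PCI_BLOCKED are unset in this run, which is why the traced test reads [[ =~ 0000:00:10.0 ]] with an empty left-hand side and the function falls through to return 0. A hedged reconstruction of that gate; the real scripts/common.sh logic differs in detail (the trace shows a =~ regex test, a plain substring match is used below for clarity):

	pci_can_use() { # usage: pci_can_use <bdf>, e.g. pci_can_use 0000:00:10.0
		local i
		# A non-empty allow-list must mention the bdf.
		if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $1 "* ]]; then
			return 1
		fi
		# No block-list at all -> device is usable.
		[[ -z ${PCI_BLOCKED:-} ]] && return 0
		# Otherwise the bdf must not be blocked.
		[[ " $PCI_BLOCKED " != *" $1 "* ]]
	}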
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.015 
17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.015 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:40.016 
17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.016 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.017 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.018 17:40:03 nvme_scc -- 
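The field this whole dump is collected for appears in the records just above: nvme1[oncs]=0x15d. In the NVMe specification, ONCS bit 8 (0x100) advertises the Copy command, i.e. Simple Copy support, which is what the nvme_scc test probes; 0x15d & 0x100 is non-zero, so this QEMU controller qualifies. A hypothetical consumer of the array (supports_scc is illustrative, not a helper from the tree):

	declare -A nvme1=([oncs]=0x15d) # stand-in for the array built in the trace

	supports_scc() { # usage: supports_scc <array-name>
		local -n _ctrl=$1
		(( ${_ctrl[oncs]:-0} & (1 << 8) )) # ONCS bit 8: Copy command
	}

	supports_scc nvme1 && echo "nvme1 supports simple copy"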
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.018 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.018 17:40:03 
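The `-- #` entries above are bash xtrace from nvme/functions.sh stepping through its nvme_get helper one field at a time. A minimal sketch of that loop, reconstructed from the trace markers (@16-@23) rather than copied from the SPDK source; NVME_BIN here stands in for however the script locates nvme-cli (the trace shows /usr/local/src/nvme-cli/nvme):

    nvme_get() {                            # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # @20: declare a global assoc array, nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # @22: keep only "field : value" lines
            reg=${reg//[[:space:]]/}        # "sqes " -> "sqes"
            eval "${ref}[$reg]=\"\${val# }\""   # @23: nvme1[sqes]=0x66, nvme1[cqes]=0x44, ...
        done < <("${NVME_BIN:-nvme}" "$@")  # @16: run id-ctrl/id-ns and parse its stdout
    }

Because val is the last variable handed to read, everything after the first colon lands in it, which is why multi-colon values such as ps0 ('mp:25.00W operational enlat:16 ...') survive intact.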
00:09:40.018 17:40:03 nvme_scc -- [condensed xtrace] nvme_get ng1n1 id-ns /dev/ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:40.020 17:40:03 nvme_scc -- [condensed xtrace] functions.sh@58: _ctrl_ns[1]=ng1n1; glob also matches /sys/class/nvme/nvme1/nvme1n1, ns_dev=nvme1n1, nvme_get nvme1n1 id-ns /dev/nvme1n1 starts
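The namespace loop at @54 builds an extglob pattern out of two parameter expansions; spelled out for this controller (illustrative snippet, requires shopt -s extglob):

    ctrl=/sys/class/nvme/nvme1
    echo "${ctrl##*nvme}"   # -> 1      (controller instance)
    echo "${ctrl##*/}"      # -> nvme1  (bare device name)
    # so "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to
    # /sys/class/nvme/nvme1/@(ng1|nvme1n)*  matching both ng1n1 and nvme1n1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*n}"    # -> 1 for both: the namespace id indexed at @58
    done

Both matches reduce to index 1, so the nvme1n1 pass below lands in the same _ctrl_ns slot that ng1n1 just filled.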
00:09:40.020 17:40:03 nvme_scc -- [condensed xtrace] nvme_get nvme1n1 id-ns /dev/nvme1n1: every field identical to ng1n1 above (nsze/ncap/nuse=0x17a17a, nsfeat=0x14, nlbaf=7, flbas=0x7, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, all-zero nguid/eui64, lbaf0-lbaf7 with lbaf7 'ms:64 lbads:12 rp:0' in use)
00:09:40.021 17:40:03 nvme_scc -- [condensed xtrace] functions.sh@58: _ctrl_ns[1]=nvme1n1 (overwrites the ng1n1 entry for the same namespace id)
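A quick sanity check on the captured geometry (my arithmetic, not part of the test output): flbas=0x7 keeps the active format index in its low nibble, selecting lbaf7, whose lbads:12 means 2^12-byte data blocks plus ms:64 metadata bytes per block:

    echo $((0x17a17a))               # 1548666 blocks (nsze)
    echo $((0x17a17a * (1 << 12)))   # 6343335936 bytes, i.e. ~6.3 GB of data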
00:09:40.021 17:40:03 nvme_scc -- [condensed xtrace] functions.sh@60-63: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:10.0, ordered_ctrls[1]=nvme1; loop advances to /sys/class/nvme/nvme2, pci=0000:00:12.0, pci_can_use 0000:00:12.0 returns 0 (the @21 regex test runs with an empty left operand because no block list is set), nvme_get nvme2 id-ctrl /dev/nvme2 starts
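Once a controller is fully parsed, @60-@63 register it in four globals: ctrls (name -> name), nvmes (name -> the name of its per-namespace array, later dereferenced through a nameref as at @53), bdfs (name -> PCI address), and ordered_ctrls (instance -> name). A toy lookup over the values recorded so far (array names from the trace, the loop itself illustrative):

    declare -A bdfs=([nvme1]=0000:00:10.0)
    declare -a ordered_ctrls=([1]=nvme1)
    for ctrl in "${ordered_ctrls[@]}"; do
        echo "$ctrl -> PCI ${bdfs[$ctrl]}"   # nvme1 -> PCI 0000:00:10.0
    done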
'nvme2[fr]="8.0.0 "' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.022 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:40.023 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.023 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:40.024 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:40.024 
17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.024 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.025 
17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
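The namespace walk that begins here (functions.sh@54-58) uses the extglob pattern quoted in the trace to visit both the generic character nodes (ng2n1..ng2n3) and any block namespaces (nvme2n*) under controller nvme2, running nvme_get with id-ns for each. A short sketch of that expansion, plus the capacity implied by the ng2n1 geometry just captured (nsze=0x100000 blocks, flbas=0x4 selecting LBA format 4, whose lbaf4 entry further below reads lbads:12, i.e. 4096-byte blocks):

    #!/usr/bin/env bash
    shopt -s extglob

    ctrl=/sys/class/nvme/nvme2
    # Same pattern as functions.sh@54: expands to @(ng2|nvme2n)* here.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                # mirrors the @55 existence check
        echo "namespace device: ${ns##*/}"      # ng2n1, ng2n2, ng2n3, ...
    done

    # Capacity from the ng2n1 identify data above:
    echo $(( 0x100000 * (1 << 12) ))            # 4294967296 bytes = 4 GiB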
00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.025 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.026 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:40.027 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 
17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.027 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.028 17:40:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.028 17:40:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:40.028 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.029 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:40.294 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:40.295 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.296 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:40.296 17:40:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.296 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.297 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:40.298 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.298 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:40.299 
17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.299 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:40.300 17:40:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.300 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.301 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:40.301 17:40:03 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:40.301 17:40:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:40.301 17:40:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:40.301 17:40:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.301 17:40:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.301 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:40.302 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:40.302 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.302 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 
17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:40.303 17:40:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:40.303 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 
17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:40.304 
17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:40.304 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.305 17:40:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:40.305 17:40:03 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:09:40.305 17:40:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
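
The register dumps filling the trace above come from the nvme_get helper in nvme/functions.sh: it pipes /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvmeX through a "while IFS=: read -r reg val" loop and evals each register/value pair into a per-controller associative array (nvme0, nvme1, ...). A minimal standalone sketch of that parsing pattern follows; it assumes nvme-cli is installed and /dev/nvme0 exists, and the array name ctrl_info is ours, since the real helper writes into a dynamically named array and also handles namespace (id-ns) data:

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing pattern traced above (functions.sh@16-23).
# The real nvme_get stores into a dynamically named array through eval;
# a fixed associative array keeps this sketch self-contained.
declare -A ctrl_info

while IFS=: read -r reg val; do
  [[ -n $reg && -n $val ]] || continue   # keep only "register : value" lines
  reg=${reg//[[:space:]]/}               # strip the padding around the key
  ctrl_info[$reg]=${val# }               # drop the single leading space
done < <(nvme id-ctrl /dev/nvme0)

echo "vid=${ctrl_info[vid]} oncs=${ctrl_info[oncs]}"
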
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:40.305 17:40:03 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:40.306 17:40:03 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:40.306 17:40:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:40.306 17:40:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:40.306 17:40:03 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:40.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:41.132 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:41.132 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:41.132 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:41.132 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:41.132 17:40:04 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:41.132 17:40:04 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:41.132 17:40:04 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.132 17:40:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:41.132 ************************************
00:09:41.132 START TEST nvme_simple_copy
00:09:41.132 ************************************
00:09:41.132 17:40:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:41.393 Initializing NVMe Controllers
00:09:41.393 Attaching to 0000:00:10.0
00:09:41.393 Controller supports SCC. Attached to 0000:00:10.0
00:09:41.393 Namespace ID: 1 size: 6GB
00:09:41.393 Initialization complete.
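
The controller selection that just completed keys off ONCS bit 8, the Copy (Simple Copy) command bit: every QEMU controller here reports oncs=0x15d, which has bit 8 (0x100) set, so all four pass ctrl_has_scc and the first in order, nvme1 at 0000:00:10.0, is chosen. A condensed sketch of that test follows; unlike the functions.sh version, which resolves the ONCS value from the controller name via a nameref, this one takes the value directly:

# ONCS bit 8 (0x100) advertises the Copy command (SCC), per the
# (( oncs & 1 << 8 )) check traced at functions.sh@188 above.
ctrl_has_scc() {
  local oncs=$1
  (( oncs & 1 << 8 ))   # true when the controller implements Copy
}

ctrl_has_scc 0x15d && echo "supports Simple Copy"   # 0x15d: bit 8 is set
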
00:09:41.393 00:09:41.393 Controller QEMU NVMe Ctrl (12340 ) 00:09:41.393 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:41.393 Namespace Block Size:4096 00:09:41.393 Writing LBAs 0 to 63 with Random Data 00:09:41.393 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:41.393 LBAs matching Written Data: 64 00:09:41.393 00:09:41.393 real 0m0.270s 00:09:41.393 user 0m0.105s 00:09:41.393 sys 0m0.063s 00:09:41.393 17:40:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.393 ************************************ 00:09:41.393 END TEST nvme_simple_copy 00:09:41.393 ************************************ 00:09:41.393 17:40:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:41.653 00:09:41.653 real 0m7.649s 00:09:41.653 user 0m1.147s 00:09:41.653 sys 0m1.387s 00:09:41.653 17:40:04 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.653 ************************************ 00:09:41.653 END TEST nvme_scc 00:09:41.653 17:40:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:41.653 ************************************ 00:09:41.653 17:40:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:41.653 17:40:04 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:41.653 17:40:04 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:41.653 17:40:04 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:41.653 17:40:04 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:41.653 17:40:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.653 17:40:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.653 17:40:04 -- common/autotest_common.sh@10 -- # set +x 00:09:41.653 ************************************ 00:09:41.653 START TEST nvme_fdp 00:09:41.653 ************************************ 00:09:41.653 17:40:04 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:41.653 * Looking for test storage... 00:09:41.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.653 17:40:05 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.653 --rc genhtml_branch_coverage=1 00:09:41.653 --rc genhtml_function_coverage=1 00:09:41.653 --rc genhtml_legend=1 00:09:41.653 --rc geninfo_all_blocks=1 00:09:41.653 --rc geninfo_unexecuted_blocks=1 00:09:41.653 00:09:41.653 ' 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.653 --rc genhtml_branch_coverage=1 00:09:41.653 --rc genhtml_function_coverage=1 00:09:41.653 --rc genhtml_legend=1 00:09:41.653 --rc geninfo_all_blocks=1 00:09:41.653 --rc geninfo_unexecuted_blocks=1 00:09:41.653 00:09:41.653 ' 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.653 --rc genhtml_branch_coverage=1 00:09:41.653 --rc genhtml_function_coverage=1 00:09:41.653 --rc genhtml_legend=1 00:09:41.653 --rc geninfo_all_blocks=1 00:09:41.653 --rc geninfo_unexecuted_blocks=1 00:09:41.653 00:09:41.653 ' 00:09:41.653 17:40:05 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.654 --rc genhtml_branch_coverage=1 00:09:41.654 --rc genhtml_function_coverage=1 00:09:41.654 --rc genhtml_legend=1 00:09:41.654 --rc geninfo_all_blocks=1 00:09:41.654 --rc geninfo_unexecuted_blocks=1 00:09:41.654 00:09:41.654 ' 00:09:41.654 17:40:05 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.654 17:40:05 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.654 17:40:05 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.654 17:40:05 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.654 17:40:05 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.654 17:40:05 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.654 17:40:05 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.654 17:40:05 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.654 17:40:05 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:41.654 17:40:05 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:41.654 17:40:05 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:41.654 17:40:05 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.654 17:40:05 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:41.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.172 Waiting for block devices as requested 00:09:42.172 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.172 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.432 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.432 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.740 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:47.740 17:40:10 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:47.740 17:40:10 nvme_fdp 
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:47.740 17:40:10 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:47.740 17:40:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:09:47.740 17:40:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:47.740 17:40:10 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:09:47.740 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:09:47.741 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
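The register dump above is produced by the nvme_get helper in test/common/nvme/functions.sh: it runs nvme-cli's id-ctrl, splits each "field : value" output line on ':' (the IFS=: and read -r reg val pairs that repeat through the trace), and stores the result in a bash associative array named after the controller. A stand-alone approximation of that loop, assuming nvme-cli at the path shown in the trace and root access to the device; illustrative, not the exact functions.sh source:

  declare -A nvme0=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}             # keys in the trace are space-free
      val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace from the value
      [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${nvme0[vid]} mn=${nvme0[mn]}"

The real helper additionally evals into a caller-named array reference, which is what the local -gA 'nvme0=()' and eval 'nvme0[vid]="0x1b36"' lines in the trace correspond to.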
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:47.742 17:40:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:09:47.743 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
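The @54 glob in the enumeration above relies on extglob, enabled earlier at scripts/common.sh@15: a single pattern matches both the generic char node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory. A minimal reproduction of just that expansion, with the sysfs path taken from the trace:

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme0
  # After parameter expansion the pattern below reads @(ng0|nvme0n)*
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] && echo "namespace node: ${ns##*/}"
  done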
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
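A worked reading of the LBA-format fields above: flbas=0x4 selects format index 4 in its low nibble, and the lbaf4 line reports lbads:12, i.e. 2^12 = 4096-byte logical blocks with no metadata, which is why lbaf4 carries the "(in use)" marker. In shell terms:

  flbas=0x4                  # from the ng0n1 dump above
  fmt=$((flbas & 0xf))       # low nibble selects the active LBA format index
  lbads=12                   # lbads field of the lbaf4 line
  echo "lbaf$fmt in use: $((1 << lbads))-byte logical blocks"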
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:47.744 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.745 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.746 17:40:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:47.746 17:40:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:47.746 17:40:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:47.746 17:40:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.746 17:40:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:47.746 17:40:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:47.746 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.746 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.746 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.746 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:47.746 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:47.746 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.747 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.748 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:47.749 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:47.749 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:47.750 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.750 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.751 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.751 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:47.751 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.751 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:47.752 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
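
A worked read of the nvme1n1 geometry being captured here: flbas=0x7 selects LBA format 7 (the entry tagged "(in use)" just below), whose lbads:12 gives 2^12 = 4096-byte data blocks with ms:64 bytes of per-block metadata, and nlbaf=7 is zero-based, so the eight formats lbaf0..lbaf7 are exactly what the controller advertises. With nsze = 0x17a17a blocks, the raw data capacity works out as (shell arithmetic for illustration):

    $ printf '%d\n' 0x17a17a          # namespace size in logical blocks
    1548666
    $ echo $(( 0x17a17a * 4096 ))     # x 2^lbads bytes per block
    6343335936                        # ~5.91 GiB of data; metadata is extra
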
00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:47.752 17:40:11 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:47.752 17:40:11 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:47.752 17:40:11 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.752 17:40:11 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:47.752 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
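
Stepping back from the field-by-field noise: the functions.sh@47-63 lines a short way above show the outer discovery loop finishing controller nvme1 (recording it in the ctrls/nvmes/bdfs/ordered_ctrls maps with its PCI address 0000:00:10.0) and then moving on to nvme2 at 0000:00:12.0, whose id-ctrl output is parsed here. A simplified outline of that walk; the PCI-address lookup is an assumption, since the trace only shows the resulting value:

    # Outline of the traced functions.sh@47-63 loop; sketch only.
    shopt -s extglob                      # needed for the @(...) glob below
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue    # scripts/common.sh allow/block check
        ctrl_dev=${ctrl##*/}
        nvme_get "$ctrl_dev" /usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev"
        # both the ngXnY char nodes and nvmeXnY block nodes get id-ns'd:
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue      # glob may stay literal (see @55)
            nvme_get "${ns##*/}" /usr/local/src/nvme-cli/nvme id-ns "/dev/${ns##*/}"
        done
        ctrls["$ctrl_dev"]=$ctrl_dev      # bookkeeping as in @60-63
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
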
00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:47.753 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.753 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
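
Two of the id-ctrl values just parsed decode to something human-readable. mdts=7 is a power-of-two multiplier of the controller's minimum memory page size, so with the usual 4 KiB minimum page this QEMU controller caps a single transfer at 2^7 * 4 KiB = 512 KiB. wctemp/cctemp are reported in kelvin, so 343 K and 373 K are the 70 degC warning and 100 degC critical thresholds:

    $ echo $(( (1 << 7) * 4096 ))    # MDTS, assuming a 4 KiB MPSMIN
    524288
    $ echo $(( 343 - 273 )) $(( 373 - 273 ))
    70 100
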
00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:47.754 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.754 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
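
The oncs=0x15d and sqes/cqes values a few entries up are bit-packed. Reading 0x15d = 0b1_0101_1101 against the NVMe base spec ONCS layout (an interpretation on my part, not something the script checks here), this controller advertises Compare, Dataset Management, Write Zeroes, feature save/select, Timestamp and Copy, while leaving Write Uncorrectable, Reservations (matching rescap=0 on the namespaces) and Verify unset; the Copy support is what the mssrl/mcl/msrc = 128/128/127 namespace fields earlier relate to. sqes=0x66 and cqes=0x44 pack required and maximum queue entry sizes as powers of two in each nibble:

    $ echo $(( 1 << 6 )) $(( 1 << 4 ))   # SQ and CQ entry sizes in bytes
    64 16
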
00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:47.755 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 
17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.756 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.757 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:47.758 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 
17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:47.758 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:47.759 
17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
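[editor's sketch] Each namespace block in this trace is entered through the loop at functions.sh@54, which uses an extglob alternation to match both the generic-char node (ng2n3) and the block node (nvme2n3) under the controller's sysfs directory, then derives the namespace index with ${ns##*n} at @58. A stand-alone sketch of that enumeration, assuming the usual /sys/class/nvme layout (the echo line is illustrative):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    for ctrl in /sys/class/nvme/nvme+([0-9]); do
        inst=${ctrl##*nvme}                  # e.g. 2 for /sys/class/nvme/nvme2
        # Matches ng2n* (char) and nvme2n* (block) nodes, as at functions.sh@54.
        for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
            echo "controller=${ctrl##*/} ns=${ns##*/} index=${ns##*n}"
        done
    done

The @55 `[[ -e ... ]]` guard seen in the trace covers the nullglob-less case where an unmatched pattern would survive as a literal string.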
00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:47.759 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.759 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.760 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:47.760 17:40:11 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.760 
17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.760 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:47.761 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.761 
17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.761 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
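At this point the trace has filled the associative array nvme2n1 with every field of `nvme id-ns /dev/nvme2n1`, through lbaf7. The pattern repeated on each line above (functions.sh@16 through @23) is: run nvme-cli, split each output line on the first ':' via IFS=: and read -r reg val, skip lines with an empty value, and eval the pair into a global array named after the device. Below is a minimal sketch of that pattern, assuming the usual "key : value" shape of nvme-cli identify output; the function name nvme_get_sketch is illustrative and makes no claim to match the SPDK helper verbatim.

nvme_get_sketch() {
  local ref=$1 dev=$2 reg val
  local -gA "$ref=()"                   # same trick as functions.sh@20: global array named by $ref
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue           # functions.sh@22 skips header/empty lines the same way
    reg=${reg//[[:space:]]/}            # "lbaf  0 " -> "lbaf0", matching the keys in the trace
    eval "${ref}[${reg}]=\"${val# }\""  # functions.sh@23 pattern; safe here since id-ns values carry no quotes
  done < <(nvme id-ns "$dev")
}
# Usage (hypothetical): nvme_get_sketch nvme2n1 /dev/nvme2n1
# afterwards ${nvme2n1[nsze]} would hold 0x100000, as logged above.

Note that val keeps everything after the first colon, which is why multi-colon values such as "ms:0 lbads:9 rp:0" survive intact in the lbafN entries.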
00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:47.762 17:40:11 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.762 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:47.763 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:47.763 17:40:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.763 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:47.764 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:47.764 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:47.765 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:47.765 17:40:11 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:47.765 17:40:11 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:47.765 17:40:11 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.765 17:40:11 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
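By this point controller nvme2 and its three namespaces have been registered (functions.sh@60 through @63 store the controller in ctrls, its namespace map in nvmes, and its PCI address 0000:00:12.0 in bdfs), pci_can_use has accepted 0000:00:13.0, apparently because no PCI allow/block list is set in this run, and id-ctrl output for nvme3 is being parsed the same way (QEMU device: vid 0x1b36, ssvid 0x1af4, sn 12343). Two captured fields are worth decoding: mdts=7 caps a single transfer at 2^7 minimum-size memory pages, and the low nibble of a namespace's flbas selects its in-use LBA format. A hedged, self-contained sketch follows; the 4 KiB page size is an assumption (QEMU's usual CAP.MPSMIN, which lives in the controller registers and not in id-ctrl output), and the array contents are copied from this run's trace.

#!/usr/bin/env bash
# Decoding example for fields seen above; values are from this run's log.
declare -A nvme3=([mdts]=7)
declare -A nvme2n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
mpsmin=4096                                   # assumed CAP.MPSMIN of 4 KiB
echo "MDTS max transfer: $(( (1 << nvme3[mdts]) * mpsmin )) bytes"   # 2^7 * 4 KiB = 524288
fmt=$(( nvme2n1[flbas] & 0xf ))               # FLBAS bits 3:0 pick the active format -> 4
lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<<"${nvme2n1[lbaf$fmt]}")
echo "in-use logical block size: $(( 1 << lbads )) bytes"            # 2^12 = 4096

The lbaf4 entry the trace marks "(in use)" is consistent with flbas=0x4, and ctratt=0x88010 having bit 19 (0x80000) set is, per the NVMe 2.0 definition of CTRATT, the Flexible Data Placement capability this nvme_fdp suite is probing for.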
00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.765 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.765 17:40:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 
17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.766 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.767 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:47.768 17:40:11 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:48.050 17:40:11 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:48.050 17:40:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:48.051 17:40:11 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:48.051 17:40:11 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:48.051 17:40:11 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:48.051 17:40:11 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:48.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.879 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.879 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.879 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.879 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:49.140 17:40:12 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:49.140 17:40:12 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:49.140 17:40:12 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.140 17:40:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:49.140 ************************************ 00:09:49.140 START TEST nvme_flexible_data_placement 00:09:49.140 ************************************ 00:09:49.140 17:40:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:49.401 Initializing NVMe Controllers 00:09:49.401 Attaching to 0000:00:13.0 00:09:49.401 Controller supports FDP Attached to 0000:00:13.0 00:09:49.401 Namespace ID: 1 Endurance Group ID: 1 00:09:49.401 Initialization complete. 
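The controller selection traced above hinges on one check: get_ctrls_with_feature calls ctrl_has_fdp for each controller, which reads the Identify Controller CTRATT value out of the parsed register arrays and tests bit 19, the FDP attribute. That is why nvme3 (ctratt=0x88010) is chosen while nvme0, nvme1, and nvme2 (ctratt=0x8000) are skipped. A minimal standalone sketch of the same test, assuming nvme-cli is installed and the controller is still visible to the kernel nvme driver (in this run the devices were rebound to uio_pci_generic, so this would have to run before setup.sh):

    # Read CTRATT from Identify Controller and test bit 19 (FDP attribute).
    # /dev/nvme3 is a hypothetical device path for illustration.
    ctratt=$(nvme id-ctrl /dev/nvme3 | awk '/^ctratt/ {print $3}')
    if (( ctratt & (1 << 19) )); then
        echo "nvme3 supports FDP"   # 0x88010 has bit 19 (0x80000) set; 0x8000 does not
    fi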
00:09:49.401 00:09:49.401 ================================== 00:09:49.401 == FDP tests for Namespace: #01 == 00:09:49.401 ================================== 00:09:49.401 00:09:49.401 Get Feature: FDP: 00:09:49.401 ================= 00:09:49.401 Enabled: Yes 00:09:49.401 FDP configuration Index: 0 00:09:49.401 00:09:49.401 FDP configurations log page 00:09:49.401 =========================== 00:09:49.401 Number of FDP configurations: 1 00:09:49.401 Version: 0 00:09:49.401 Size: 112 00:09:49.401 FDP Configuration Descriptor: 0 00:09:49.401 Descriptor Size: 96 00:09:49.401 Reclaim Group Identifier format: 2 00:09:49.401 FDP Volatile Write Cache: Not Present 00:09:49.401 FDP Configuration: Valid 00:09:49.401 Vendor Specific Size: 0 00:09:49.401 Number of Reclaim Groups: 2 00:09:49.401 Number of Reclaim Unit Handles: 8 00:09:49.401 Max Placement Identifiers: 128 00:09:49.401 Number of Namespaces Supported: 256 00:09:49.401 Reclaim unit Nominal Size: 6000000 bytes 00:09:49.401 Estimated Reclaim Unit Time Limit: Not Reported 00:09:49.401 RUH Desc #000: RUH Type: Initially Isolated 00:09:49.401 RUH Desc #001: RUH Type: Initially Isolated 00:09:49.401 RUH Desc #002: RUH Type: Initially Isolated 00:09:49.401 RUH Desc #003: RUH Type: Initially Isolated 00:09:49.401 RUH Desc #004: RUH Type: Initially Isolated 00:09:49.401 RUH Desc #005: RUH Type: Initially Isolated 00:09:49.401 RUH Desc #006: RUH Type: Initially Isolated 00:09:49.401 RUH Desc #007: RUH Type: Initially Isolated 00:09:49.401 00:09:49.401 FDP reclaim unit handle usage log page 00:09:49.401 ====================================== 00:09:49.401 Number of Reclaim Unit Handles: 8 00:09:49.401 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:49.401 RUH Usage Desc #001: RUH Attributes: Unused 00:09:49.401 RUH Usage Desc #002: RUH Attributes: Unused 00:09:49.401 RUH Usage Desc #003: RUH Attributes: Unused 00:09:49.401 RUH Usage Desc #004: RUH Attributes: Unused 00:09:49.401 RUH Usage Desc #005: RUH Attributes: Unused 00:09:49.401 RUH Usage Desc #006: RUH Attributes: Unused 00:09:49.401 RUH Usage Desc #007: RUH Attributes: Unused 00:09:49.401 00:09:49.401 FDP statistics log page 00:09:49.401 ======================= 00:09:49.401 Host bytes with metadata written: 825212928 00:09:49.401 Media bytes with metadata written: 825307136 00:09:49.401 Media bytes erased: 0 00:09:49.401 00:09:49.401 FDP Reclaim unit handle status 00:09:49.401 ============================== 00:09:49.401 Number of RUHS descriptors: 2 00:09:49.401 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004d04 00:09:49.401 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:49.401 00:09:49.401 FDP write on placement id: 0 success 00:09:49.401 00:09:49.401 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:49.401 00:09:49.401 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:49.401 00:09:49.401 Get Feature: FDP Events for Placement handle: #0 00:09:49.401 ======================== 00:09:49.401 Number of FDP Events: 6 00:09:49.401 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:49.401 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:49.401 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:49.401 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:49.401 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:49.401 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:49.401 00:09:49.401 FDP events log page
00:09:49.401 =================== 00:09:49.401 Number of FDP events: 1 00:09:49.401 FDP Event #0: 00:09:49.401 Event Type: RU Not Written to Capacity 00:09:49.401 Placement Identifier: Valid 00:09:49.401 NSID: Valid 00:09:49.401 Location: Valid 00:09:49.401 Placement Identifier: 0 00:09:49.401 Event Timestamp: 6 00:09:49.401 Namespace Identifier: 1 00:09:49.401 Reclaim Group Identifier: 0 00:09:49.401 Reclaim Unit Handle Identifier: 0 00:09:49.401 00:09:49.401 FDP test passed 00:09:49.401 00:09:49.401 real 0m0.257s 00:09:49.401 user 0m0.071s 00:09:49.401 sys 0m0.084s 00:09:49.401 ************************************ 00:09:49.401 END TEST nvme_flexible_data_placement 00:09:49.401 ************************************ 00:09:49.401 17:40:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.401 17:40:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:49.401 00:09:49.401 real 0m7.767s 00:09:49.401 user 0m1.172s 00:09:49.401 sys 0m1.440s 00:09:49.401 ************************************ 00:09:49.401 END TEST nvme_fdp 00:09:49.401 17:40:12 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.401 17:40:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:49.401 ************************************ 00:09:49.401 17:40:12 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:49.401 17:40:12 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:49.401 17:40:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.401 17:40:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.401 17:40:12 -- common/autotest_common.sh@10 -- # set +x 00:09:49.401 ************************************ 00:09:49.401 START TEST nvme_rpc 00:09:49.401 ************************************ 00:09:49.401 17:40:12 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:49.401 * Looking for test storage... 
00:09:49.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:49.401 17:40:12 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.401 17:40:12 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.401 17:40:12 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.662 17:40:12 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.662 --rc genhtml_branch_coverage=1 00:09:49.662 --rc genhtml_function_coverage=1 00:09:49.662 --rc genhtml_legend=1 00:09:49.662 --rc geninfo_all_blocks=1 00:09:49.662 --rc geninfo_unexecuted_blocks=1 00:09:49.662 00:09:49.662 ' 00:09:49.662 17:40:12 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.662 17:40:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:49.662 17:40:12 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:49.662 17:40:13 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:49.662 17:40:13 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:49.663 17:40:13 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:49.663 17:40:13 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:49.663 17:40:13 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65929 00:09:49.663 17:40:13 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:49.663 17:40:13 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65929 00:09:49.663 17:40:13 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65929 ']' 00:09:49.663 17:40:13 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.663 17:40:13 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.663 17:40:13 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.663 17:40:13 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.663 17:40:13 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.663 17:40:13 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:49.663 [2024-11-20 17:40:13.102358] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
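The bdf=0000:00:10.0 picked just above comes from get_first_nvme_bdf: it fills an array with every NVMe transport address that gen_nvme.sh emits and echoes the first entry. A condensed sketch of the same lookup; the jq filter is taken verbatim from the trace, while head -n 1 stands in for the script's array indexing and is an assumption of this sketch:

    # First NVMe PCI address from SPDK's generated bdev config.
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n 1)
    echo "$bdf"   # prints 0000:00:10.0 on this runner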
00:09:49.663 [2024-11-20 17:40:13.102484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65929 ] 00:09:49.923 [2024-11-20 17:40:13.261649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.923 [2024-11-20 17:40:13.371270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.923 [2024-11-20 17:40:13.371358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.495 17:40:13 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.495 17:40:13 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.495 17:40:13 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:50.756 Nvme0n1 00:09:50.756 17:40:14 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:50.756 17:40:14 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:51.017 request: 00:09:51.017 { 00:09:51.017 "bdev_name": "Nvme0n1", 00:09:51.017 "filename": "non_existing_file", 00:09:51.017 "method": "bdev_nvme_apply_firmware", 00:09:51.017 "req_id": 1 00:09:51.017 } 00:09:51.017 Got JSON-RPC error response 00:09:51.017 response: 00:09:51.017 { 00:09:51.017 "code": -32603, 00:09:51.017 "message": "open file failed." 00:09:51.017 } 00:09:51.017 17:40:14 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:51.017 17:40:14 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:51.017 17:40:14 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:51.276 17:40:14 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:51.276 17:40:14 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65929 00:09:51.276 17:40:14 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65929 ']' 00:09:51.276 17:40:14 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65929 00:09:51.276 17:40:14 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:51.276 17:40:14 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.276 17:40:14 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65929 00:09:51.276 17:40:14 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.277 killing process with pid 65929 00:09:51.277 17:40:14 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.277 17:40:14 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65929' 00:09:51.277 17:40:14 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65929 00:09:51.277 17:40:14 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65929 00:09:52.659 00:09:52.659 real 0m3.352s 00:09:52.659 user 0m6.389s 00:09:52.659 sys 0m0.518s 00:09:52.659 17:40:16 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.659 ************************************ 00:09:52.659 END TEST nvme_rpc 00:09:52.659 ************************************ 00:09:52.659 17:40:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.918 17:40:16 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:52.918 17:40:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:52.918 17:40:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.918 17:40:16 -- common/autotest_common.sh@10 -- # set +x 00:09:52.918 ************************************ 00:09:52.918 START TEST nvme_rpc_timeouts 00:09:52.918 ************************************ 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:52.918 * Looking for test storage... 00:09:52.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.918 17:40:16 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.918 --rc genhtml_branch_coverage=1 00:09:52.918 --rc genhtml_function_coverage=1 00:09:52.918 --rc genhtml_legend=1 00:09:52.918 --rc geninfo_all_blocks=1 00:09:52.918 --rc geninfo_unexecuted_blocks=1 00:09:52.918 00:09:52.918 ' 00:09:52.918 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.919 --rc genhtml_branch_coverage=1 00:09:52.919 --rc genhtml_function_coverage=1 00:09:52.919 --rc genhtml_legend=1 00:09:52.919 --rc geninfo_all_blocks=1 00:09:52.919 --rc geninfo_unexecuted_blocks=1 00:09:52.919 00:09:52.919 ' 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.919 --rc genhtml_branch_coverage=1 00:09:52.919 --rc genhtml_function_coverage=1 00:09:52.919 --rc genhtml_legend=1 00:09:52.919 --rc geninfo_all_blocks=1 00:09:52.919 --rc geninfo_unexecuted_blocks=1 00:09:52.919 00:09:52.919 ' 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.919 --rc genhtml_branch_coverage=1 00:09:52.919 --rc genhtml_function_coverage=1 00:09:52.919 --rc genhtml_legend=1 00:09:52.919 --rc geninfo_all_blocks=1 00:09:52.919 --rc geninfo_unexecuted_blocks=1 00:09:52.919 00:09:52.919 ' 00:09:52.919 17:40:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.919 17:40:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65994 00:09:52.919 17:40:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65994 00:09:52.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:52.919 17:40:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66032 00:09:52.919 17:40:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:52.919 17:40:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66032 00:09:52.919 17:40:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66032 ']' 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.919 17:40:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:52.919 [2024-11-20 17:40:16.428710] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:09:52.919 [2024-11-20 17:40:16.428833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66032 ] 00:09:53.179 [2024-11-20 17:40:16.602682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.438 [2024-11-20 17:40:16.756819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.438 [2024-11-20 17:40:16.756943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.006 17:40:17 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.006 17:40:17 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:54.006 Checking default timeout settings: 00:09:54.006 17:40:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:54.006 17:40:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:54.267 Making settings changes with rpc: 00:09:54.267 17:40:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:54.267 17:40:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:54.596 Check default vs. modified settings: 00:09:54.596 17:40:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:54.596 17:40:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65994 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65994 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:54.882 Setting action_on_timeout is changed as expected. 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65994 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65994 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:54.882 Setting timeout_us is changed as expected. 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
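Each verification block above follows one pattern: pull a single key out of the saved default and modified config dumps with grep, keep the value column with awk, strip the JSON punctuation with sed, then confirm the two values differ. Spelled out for timeout_us, with the temp-file names from this run; this is a condensed sketch, not the script's exact control flow:

    # Compare one bdev_nvme setting between the default and modified save_config dumps.
    before=$(grep timeout_us /tmp/settings_default_65994 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep timeout_us /tmp/settings_modified_65994 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" != "$after" ]; then
        echo "Setting timeout_us is changed as expected."   # 0 -> 12000000 in this run
    fi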
00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65994 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65994 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:54.882 Setting timeout_admin_us is changed as expected. 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65994 /tmp/settings_modified_65994 00:09:54.882 17:40:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66032 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66032 ']' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66032 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66032 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.882 killing process with pid 66032 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66032' 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66032 00:09:54.882 17:40:18 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66032 00:09:56.798 RPC TIMEOUT SETTING TEST PASSED. 00:09:56.798 17:40:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
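Note on the check loop traced above: the test snapshots the target's configuration before and after bdev_nvme_set_options, then compares the three timeout fields between the two snapshots. A minimal bash sketch of that logic follows; the temp-file paths are illustrative (the real script derives them from its own PID, hence the _65994 suffix in the trace):

    # Sketch only; mirrors the grep | awk | sed pipeline in the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default          # snapshot defaults
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified         # snapshot after the change
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && { echo "ERROR: $setting did not change"; exit 1; }
        echo "Setting $setting is changed as expected."
    done

The trailing trap/killprocess entries then tear the target down: kill -0 confirms the PID is still alive, ps checks it is the expected reactor process, and kill plus wait reap it.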
00:09:56.798 00:09:56.798 real 0m3.606s 00:09:56.798 user 0m6.951s 00:09:56.798 sys 0m0.493s 00:09:56.798 17:40:19 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.798 ************************************ 00:09:56.798 END TEST nvme_rpc_timeouts 00:09:56.798 ************************************ 00:09:56.798 17:40:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:56.798 17:40:19 -- spdk/autotest.sh@239 -- # uname -s 00:09:56.798 17:40:19 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:56.798 17:40:19 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:56.798 17:40:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.798 17:40:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.798 17:40:19 -- common/autotest_common.sh@10 -- # set +x 00:09:56.798 ************************************ 00:09:56.798 START TEST sw_hotplug 00:09:56.798 ************************************ 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:56.798 * Looking for test storage... 00:09:56.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.798 17:40:19 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.798 --rc genhtml_branch_coverage=1 00:09:56.798 --rc genhtml_function_coverage=1 00:09:56.798 --rc genhtml_legend=1 00:09:56.798 --rc geninfo_all_blocks=1 00:09:56.798 --rc geninfo_unexecuted_blocks=1 00:09:56.798 00:09:56.798 ' 00:09:56.798 17:40:19 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.798 --rc genhtml_branch_coverage=1 00:09:56.798 --rc genhtml_function_coverage=1 00:09:56.798 --rc genhtml_legend=1 00:09:56.798 --rc geninfo_all_blocks=1 00:09:56.799 --rc geninfo_unexecuted_blocks=1 00:09:56.799 00:09:56.799 ' 00:09:56.799 17:40:19 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.799 --rc genhtml_branch_coverage=1 00:09:56.799 --rc genhtml_function_coverage=1 00:09:56.799 --rc genhtml_legend=1 00:09:56.799 --rc geninfo_all_blocks=1 00:09:56.799 --rc geninfo_unexecuted_blocks=1 00:09:56.799 00:09:56.799 ' 00:09:56.799 17:40:19 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.799 --rc genhtml_branch_coverage=1 00:09:56.799 --rc genhtml_function_coverage=1 00:09:56.799 --rc genhtml_legend=1 00:09:56.799 --rc geninfo_all_blocks=1 00:09:56.799 --rc geninfo_unexecuted_blocks=1 00:09:56.799 00:09:56.799 ' 00:09:56.799 17:40:19 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.059 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.060 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.060 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.060 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.060 17:40:20 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:57.060 17:40:20 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:57.060 17:40:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
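The lt 1.15 2 gate above is scripts/common.sh's version comparator: both version strings are split on ./-/: into arrays and compared field by field, with missing fields treated as zero. A condensed sketch of that comparison (not the verbatim script, which also validates each field through a decimal() helper):

    # Sketch of the cmp_versions logic traced above.
    cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]]                 # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x'

Here the check passes (1 < 2 on the first field), which is why the extra --rc lcov_branch_coverage / --rc lcov_function_coverage options are exported into LCOV_OPTS above.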
00:09:57.060 17:40:20 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.060 17:40:20 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:57.060 17:40:20 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:57.060 17:40:20 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:57.060 17:40:20 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:57.060 17:40:20 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:57.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.583 Waiting for block devices as requested 00:09:57.583 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.583 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.583 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.845 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.132 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:03.132 17:40:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:03.132 17:40:26 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:03.132 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:03.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:03.391 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:03.391 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:03.653 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.653 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:03.914 17:40:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66882 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:03.914 17:40:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:03.914 17:40:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:03.914 17:40:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:03.914 17:40:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:03.914 17:40:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:03.914 17:40:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:04.176 Initializing NVMe Controllers 00:10:04.176 Attaching to 0000:00:10.0 00:10:04.176 Attaching to 0000:00:11.0 00:10:04.176 Attached to 0000:00:11.0 00:10:04.176 Attached to 0000:00:10.0 00:10:04.176 Initialization complete. Starting I/O... 
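For reference, the nvme_in_userspace expansion traced above selects controllers by PCI class code 01/08/02 (mass storage / NVM / NVMe) and keeps only devices the nvme driver could own. A condensed sketch of that scan, staying close to the traced lspci pipeline:

    # Sketch of the class-code scan; the quoted "0108" matches lspci -mm field quoting.
    iter_nvme_bdfs() {
        # class=01 subclass=08 progif=02 -> cc="0108" plus the -p02 marker
        lspci -mm -n -D | grep -i -- -p02 \
            | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
            | tr -d '"'
    }
    bdfs=()
    for bdf in $(iter_nvme_bdfs); do
        # on Linux, keep the device only if the nvme driver node exists for it
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"

The run above finds four controllers (10.0 through 13.0), trims the list to nvme_count=2, and sets PCI_ALLOWED accordingly, which is why only 10.0 and 11.0 take part in the hotplug loop while 12.0 and 13.0 are skipped as denied.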
00:10:04.176 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:04.176 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:04.176 00:10:05.116 QEMU NVMe Ctrl (12341 ): 2231 I/Os completed (+2231) 00:10:05.116 QEMU NVMe Ctrl (12340 ): 2292 I/Os completed (+2292) 00:10:05.116 00:10:06.054 QEMU NVMe Ctrl (12341 ): 5164 I/Os completed (+2933) 00:10:06.054 QEMU NVMe Ctrl (12340 ): 5283 I/Os completed (+2991) 00:10:06.054 00:10:07.060 QEMU NVMe Ctrl (12341 ): 8291 I/Os completed (+3127) 00:10:07.060 QEMU NVMe Ctrl (12340 ): 8416 I/Os completed (+3133) 00:10:07.060 00:10:07.997 QEMU NVMe Ctrl (12341 ): 11400 I/Os completed (+3109) 00:10:07.997 QEMU NVMe Ctrl (12340 ): 11533 I/Os completed (+3117) 00:10:07.997 00:10:09.378 QEMU NVMe Ctrl (12341 ): 14471 I/Os completed (+3071) 00:10:09.378 QEMU NVMe Ctrl (12340 ): 14652 I/Os completed (+3119) 00:10:09.378 00:10:09.949 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.949 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.949 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.949 [2024-11-20 17:40:33.303087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:09.950 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.950 [2024-11-20 17:40:33.306608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.306687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.306730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.306769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.950 [2024-11-20 17:40:33.309787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.309858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.309903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.309936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.950 [2024-11-20 17:40:33.321392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:09.950 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.950 [2024-11-20 17:40:33.322474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.322508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.322527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.322541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.950 [2024-11-20 17:40:33.324229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.324268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.324284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 [2024-11-20 17:40:33.324297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.950 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_vendor 00:10:09.950 EAL: Scan for (pci) bus failed. 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.950 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:10.212 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:10.212 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:10.212 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:10.212 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:10.212 Attaching to 0000:00:10.0 00:10:10.212 Attached to 0000:00:10.0 00:10:10.212 QEMU NVMe Ctrl (12340 ): 54 I/Os completed (+54) 00:10:10.212 00:10:10.212 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:10.212 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:10.212 17:40:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:10.212 Attaching to 0000:00:11.0 00:10:10.212 Attached to 0000:00:11.0 00:10:11.205 QEMU NVMe Ctrl (12340 ): 3148 I/Os completed (+3094) 00:10:11.205 QEMU NVMe Ctrl (12341 ): 2963 I/Os completed (+2963) 00:10:11.205 00:10:12.145 QEMU NVMe Ctrl (12340 ): 6310 I/Os completed (+3162) 00:10:12.145 QEMU NVMe Ctrl (12341 ): 6184 I/Os completed (+3221) 00:10:12.145 00:10:13.086 QEMU NVMe Ctrl (12340 ): 9489 I/Os completed (+3179) 00:10:13.086 QEMU NVMe Ctrl (12341 ): 9391 I/Os completed (+3207) 00:10:13.086 00:10:14.029 QEMU NVMe Ctrl (12340 ): 12570 I/Os completed (+3081) 00:10:14.029 QEMU NVMe Ctrl (12341 ): 12547 I/Os completed (+3156) 00:10:14.029 00:10:15.415 QEMU NVMe Ctrl (12340 ): 15622 I/Os completed (+3052) 00:10:15.415 QEMU NVMe Ctrl (12341 ): 15579 I/Os completed (+3032) 00:10:15.415 00:10:15.983 QEMU NVMe Ctrl (12340 ): 18627 I/Os completed (+3005) 00:10:15.983 QEMU NVMe Ctrl (12341 ): 18687 I/Os completed (+3108) 00:10:15.983 00:10:17.367 QEMU NVMe Ctrl (12340 ): 21783 I/Os completed (+3156) 
00:10:17.367 QEMU NVMe Ctrl (12341 ): 21848 I/Os completed (+3161) 00:10:17.367 00:10:18.014 QEMU NVMe Ctrl (12340 ): 24859 I/Os completed (+3076) 00:10:18.014 QEMU NVMe Ctrl (12341 ): 24946 I/Os completed (+3098) 00:10:18.014 00:10:19.399 QEMU NVMe Ctrl (12340 ): 27823 I/Os completed (+2964) 00:10:19.399 QEMU NVMe Ctrl (12341 ): 28013 I/Os completed (+3067) 00:10:19.399 00:10:20.344 QEMU NVMe Ctrl (12340 ): 30740 I/Os completed (+2917) 00:10:20.344 QEMU NVMe Ctrl (12341 ): 30986 I/Os completed (+2973) 00:10:20.344 00:10:21.285 QEMU NVMe Ctrl (12340 ): 33684 I/Os completed (+2944) 00:10:21.285 QEMU NVMe Ctrl (12341 ): 33957 I/Os completed (+2971) 00:10:21.285 00:10:22.226 QEMU NVMe Ctrl (12340 ): 36775 I/Os completed (+3091) 00:10:22.226 QEMU NVMe Ctrl (12341 ): 37078 I/Os completed (+3121) 00:10:22.226 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.226 [2024-11-20 17:40:45.571690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:22.226 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:22.226 [2024-11-20 17:40:45.572933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.572990] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.573013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.573035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.226 [2024-11-20 17:40:45.575421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.575478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.575493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.575507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.226 [2024-11-20 17:40:45.592948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:22.226 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:22.226 [2024-11-20 17:40:45.594032] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.594068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.594090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.594104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.226 [2024-11-20 17:40:45.595770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.595806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.595821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 [2024-11-20 17:40:45.595836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.226 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:22.226 EAL: Scan for (pci) bus failed. 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.226 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:22.488 Attaching to 0000:00:10.0 00:10:22.488 Attached to 0000:00:10.0 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.488 17:40:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.488 Attaching to 0000:00:11.0 00:10:22.488 Attached to 0000:00:11.0 00:10:23.060 QEMU NVMe Ctrl (12340 ): 2220 I/Os completed (+2220) 00:10:23.060 QEMU NVMe Ctrl (12341 ): 1967 I/Os completed (+1967) 00:10:23.060 00:10:24.001 QEMU NVMe Ctrl (12340 ): 5380 I/Os completed (+3160) 00:10:24.001 QEMU NVMe Ctrl (12341 ): 5126 I/Os completed (+3159) 00:10:24.001 00:10:25.017 QEMU NVMe Ctrl (12340 ): 8358 I/Os completed (+2978) 00:10:25.017 QEMU NVMe Ctrl (12341 ): 8111 I/Os completed (+2985) 00:10:25.017 00:10:26.403 QEMU NVMe Ctrl (12340 ): 11604 I/Os completed (+3246) 00:10:26.403 QEMU NVMe Ctrl (12341 ): 11358 I/Os completed (+3247) 00:10:26.403 00:10:27.342 QEMU NVMe Ctrl (12340 ): 14639 I/Os completed (+3035) 00:10:27.342 QEMU NVMe Ctrl (12341 ): 14400 I/Os completed (+3042) 00:10:27.342 00:10:28.305 QEMU NVMe Ctrl (12340 ): 18246 I/Os completed (+3607) 00:10:28.305 QEMU NVMe Ctrl (12341 ): 18015 I/Os completed (+3615) 00:10:28.305 00:10:29.248 QEMU NVMe Ctrl (12340 ): 21237 I/Os completed (+2991) 00:10:29.248 QEMU NVMe Ctrl (12341 ): 21075 I/Os completed (+3060) 00:10:29.248 
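Each hotplug event in this phase follows the same kernel-side cycle: surprise-remove every controller through sysfs, give the example app hotplug_wait seconds to notice, then rescan the bus and rebind the devices to uio_pci_generic. The xtrace only shows the echo arguments, not the redirect targets, so the following is a plausible reconstruction using the standard kernel sysfs interfaces rather than the verbatim sw_hotplug.sh:

    # Reconstruction under stated assumptions; the sysfs node names are the
    # stock kernel PCI interfaces, not taken from the trace.
    nvmes=(0000:00:10.0 0000:00:11.0)
    hotplug_wait=6
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"       # surprise removal
    done
    sleep "$hotplug_wait"
    echo 1 > /sys/bus/pci/rescan                          # rediscover devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe          # bind per override
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done

The EAL 'cannot open sysfs value ... Scan for (pci) bus failed' lines interleaved above are expected noise: the app races the removal and briefly scans a device whose sysfs entries are already gone.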
00:10:30.193 QEMU NVMe Ctrl (12340 ): 24234 I/Os completed (+2997) 00:10:30.193 QEMU NVMe Ctrl (12341 ): 24052 I/Os completed (+2977) 00:10:30.193 00:10:31.138 QEMU NVMe Ctrl (12340 ): 27088 I/Os completed (+2854) 00:10:31.138 QEMU NVMe Ctrl (12341 ): 26936 I/Os completed (+2884) 00:10:31.138 00:10:32.079 QEMU NVMe Ctrl (12340 ): 29977 I/Os completed (+2889) 00:10:32.079 QEMU NVMe Ctrl (12341 ): 29884 I/Os completed (+2948) 00:10:32.079 00:10:33.018 QEMU NVMe Ctrl (12340 ): 32940 I/Os completed (+2963) 00:10:33.018 QEMU NVMe Ctrl (12341 ): 32874 I/Os completed (+2990) 00:10:33.018 00:10:34.397 QEMU NVMe Ctrl (12340 ): 35802 I/Os completed (+2862) 00:10:34.397 QEMU NVMe Ctrl (12341 ): 35765 I/Os completed (+2891) 00:10:34.397 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.397 [2024-11-20 17:40:57.872202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:34.397 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:34.397 [2024-11-20 17:40:57.873360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.873401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.873419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.873435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:34.397 [2024-11-20 17:40:57.875317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.875362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.875376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.875390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.397 [2024-11-20 17:40:57.905672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:34.397 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:34.397 [2024-11-20 17:40:57.906778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.906827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.906848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.906864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:34.397 [2024-11-20 17:40:57.908567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.908605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.908622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 [2024-11-20 17:40:57.908635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:34.397 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:34.397 17:40:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:34.397 EAL: Scan for (pci) bus failed. 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.658 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:34.658 Attaching to 0000:00:10.0 00:10:34.658 Attached to 0000:00:10.0 00:10:34.922 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:34.922 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.922 17:40:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:34.922 Attaching to 0000:00:11.0 00:10:34.922 Attached to 0000:00:11.0 00:10:34.922 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:34.922 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:34.922 [2024-11-20 17:40:58.256274] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:47.149 17:41:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:47.149 17:41:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.149 17:41:10 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.95 00:10:47.149 17:41:10 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.95 00:10:47.149 17:41:10 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:47.149 17:41:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.95 00:10:47.149 17:41:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.95 2 00:10:47.149 remove_attach_helper took 42.95s to complete (handling 2 nvme drive(s)) 17:41:10 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66882 00:10:53.738 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66882) - No such process 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66882 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67434 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67434 00:10:53.738 17:41:16 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67434 ']' 00:10:53.738 17:41:16 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.738 17:41:16 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.738 17:41:16 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.738 17:41:16 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.738 17:41:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.738 17:41:16 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:53.738 [2024-11-20 17:41:16.338351] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:10:53.738 [2024-11-20 17:41:16.338535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67434 ] 00:10:53.738 [2024-11-20 17:41:16.496988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.738 [2024-11-20 17:41:16.598353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:53.738 17:41:17 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:53.738 17:41:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.333 17:41:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.333 17:41:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.333 17:41:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.333 [2024-11-20 17:41:23.303165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:00.333 [2024-11-20 17:41:23.304632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.304677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.304694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 [2024-11-20 17:41:23.304718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.304731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.304745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 [2024-11-20 17:41:23.304754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.304762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.304769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 [2024-11-20 17:41:23.304781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.304788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.304796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.333 17:41:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.333 17:41:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.333 [2024-11-20 17:41:23.803157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:00.333 17:41:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.333 [2024-11-20 17:41:23.804532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.804564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.804577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 [2024-11-20 17:41:23.804593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.804602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.804609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 [2024-11-20 17:41:23.804618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.804624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.804632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 [2024-11-20 17:41:23.804640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.333 [2024-11-20 17:41:23.804648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.333 [2024-11-20 17:41:23.804655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:00.333 17:41:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.899 17:41:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.899 17:41:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.899 17:41:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.899 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:01.156 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.156 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.156 17:41:24 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.156 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.156 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:01.156 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.156 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.156 17:41:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.375 17:41:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.375 17:41:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.375 17:41:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.375 17:41:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.375 17:41:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.375 17:41:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:13.375 17:41:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.375 [2024-11-20 17:41:36.703354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:13.375 [2024-11-20 17:41:36.704683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.375 [2024-11-20 17:41:36.704719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.375 [2024-11-20 17:41:36.704731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.375 [2024-11-20 17:41:36.704747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.375 [2024-11-20 17:41:36.704755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.375 [2024-11-20 17:41:36.704764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.375 [2024-11-20 17:41:36.704771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.375 [2024-11-20 17:41:36.704779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.375 [2024-11-20 17:41:36.704786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.375 [2024-11-20 17:41:36.704795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.375 [2024-11-20 17:41:36.704802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.375 [2024-11-20 17:41:36.704810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.937 [2024-11-20 17:41:37.203352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
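In this bdev-backed phase the script cannot watch the PCI bus directly, so it polls the target over RPC: bdev_get_bdevs is piped through jq to pull the PCI address behind every NVMe bdev, and a removal event only counts as complete once the list drains to zero, matching the (( 2 > 0 )) / (( 1 > 0 )) / (( 0 > 0 )) checks in the trace. A minimal sketch of that polling loop:

    # Sketch of the bdev_bdfs wait loop traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_bdfs() {
        "$rpc" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    wait_for_removal() {
        local -a bdfs
        while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
        done
    }

After re-attach, the inverse check at sw_hotplug.sh@70/@71 waits until the sorted bdf list matches the expected '0000:00:10.0 0000:00:11.0' again before the next event starts.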
00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.937 [2024-11-20 17:41:37.204641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.937 [2024-11-20 17:41:37.204667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.937 [2024-11-20 17:41:37.204681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.937 [2024-11-20 17:41:37.204696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.937 [2024-11-20 17:41:37.204705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.937 [2024-11-20 17:41:37.204713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.937 [2024-11-20 17:41:37.204722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.937 [2024-11-20 17:41:37.204728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.937 [2024-11-20 17:41:37.204736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.937 [2024-11-20 17:41:37.204744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.937 [2024-11-20 17:41:37.204752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.937 [2024-11-20 17:41:37.204758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.937 17:41:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.937 17:41:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.937 17:41:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.937 17:41:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@68 
-- # true 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.124 17:41:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.124 17:41:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.124 17:41:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.124 [2024-11-20 17:41:49.503533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:26.124 [2024-11-20 17:41:49.505140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.124 [2024-11-20 17:41:49.505251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.124 [2024-11-20 17:41:49.505317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.124 [2024-11-20 17:41:49.505354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.124 [2024-11-20 17:41:49.505446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.124 [2024-11-20 17:41:49.505497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.124 [2024-11-20 17:41:49.505524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.124 [2024-11-20 17:41:49.505542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.124 [2024-11-20 17:41:49.505566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.124 [2024-11-20 17:41:49.505593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.124 [2024-11-20 17:41:49.505611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.124 [2024-11-20 17:41:49.505677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.124 17:41:49 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.124 17:41:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.124 17:41:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.124 17:41:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:26.124 17:41:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.381 [2024-11-20 17:41:49.903541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:26.381 [2024-11-20 17:41:49.905023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.381 [2024-11-20 17:41:49.905122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.381 [2024-11-20 17:41:49.905186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.381 [2024-11-20 17:41:49.905242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.381 [2024-11-20 17:41:49.905263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.381 [2024-11-20 17:41:49.905347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.381 [2024-11-20 17:41:49.905379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.381 [2024-11-20 17:41:49.905404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.381 [2024-11-20 17:41:49.905433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.381 [2024-11-20 17:41:49.905457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.381 [2024-11-20 17:41:49.905474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.381 [2024-11-20 17:41:49.905529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.639 17:41:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.639 17:41:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.639 17:41:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.639 17:41:50 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.639 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.897 17:41:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:11:39.091 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:39.091 17:42:02 sw_hotplug -- 
common/autotest_common.sh@709 -- # local cmd_es=0 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:39.091 17:42:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:39.091 17:42:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.648 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.648 17:42:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.649 17:42:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.649 17:42:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:45.649 [2024-11-20 17:42:08.495533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
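Each hotplug event opens at sw_hotplug.sh@39-40 with an `echo 1` per controller; xtrace records only the echoed value, not the redirection target, but the value and the "in failed state" fallout that follows match the sysfs surprise-removal knob (/sys/bus/pci/devices/<bdf>/remove -- an assumption, since the path is not in the trace). The surviving controllers are then enumerated through the RPC layer. A minimal reconstruction of the bdev_bdfs helper traced at sw_hotplug.sh@12-13 (the /dev/fd/63 in the trace shows jq reading a process substitution):

    # Reconstructed from the sw_hotplug.sh@12-13 xtrace; not verbatim source.
    # Lists the PCI address of every NVMe controller SPDK still exposes.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }
    # Used as: bdfs=($(bdev_bdfs)); after a removal the array shrinks to
    # the controllers that have not detached yet.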
00:11:45.649 [2024-11-20 17:42:08.496843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.496885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.496897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 [2024-11-20 17:42:08.496915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.496923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.496932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 [2024-11-20 17:42:08.496939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.496948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.496955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 [2024-11-20 17:42:08.496964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.496971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.496981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.649 17:42:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.649 17:42:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.649 17:42:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.649 [2024-11-20 17:42:08.995534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
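The ABORTED - BY REQUEST notices above are expected here, not failures: when a controller enters the failed state, the PCIe driver aborts the Asynchronous Event Requests it keeps outstanding on the admin queue. Read against the NVMe base spec, one completion line decodes as:

    # ASYNC EVENT REQUEST (0c)      admin opcode 0x0C, the command being aborted
    # ABORTED - BY REQUEST (00/07)  status code type 0x0 (generic), status code
    #                               0x07 (Command Abort Requested)
    # qid:0 cid:190                 admin queue, command identifier 190
    # cdw0:0 sqhd:0000              command-specific result; SQ head pointer
    # p:0 m:0 dnr:0                 phase tag, more-status, do-not-retry bits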
00:11:45.649 17:42:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.649 [2024-11-20 17:42:08.996989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.997022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.997035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 [2024-11-20 17:42:08.997050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.997058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.997066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 [2024-11-20 17:42:08.997075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.997082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.997091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 [2024-11-20 17:42:08.997098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.649 [2024-11-20 17:42:08.997106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.649 [2024-11-20 17:42:08.997113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.649 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:45.649 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.214 17:42:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.214 17:42:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.214 17:42:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.214 17:42:09 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:46.214 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:46.471 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.471 17:42:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.719 17:42:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.719 17:42:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.719 17:42:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.719 17:42:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.719 17:42:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.719 17:42:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:58.719 17:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.719 [2024-11-20 17:42:21.895742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
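Between removal and rebind, sw_hotplug.sh@50-51 polls every 0.5 s until bdev_bdfs returns nothing, printing which addresses are still attached. A plausible shape for that loop, reconstructed from the xtrace (the exact line layout in the source differs):

    # Reconstructed from the sw_hotplug.sh@50-51 xtrace; not verbatim source.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done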
00:11:58.719 [2024-11-20 17:42:21.896893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.719 [2024-11-20 17:42:21.896933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.719 [2024-11-20 17:42:21.896944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.719 [2024-11-20 17:42:21.896961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.719 [2024-11-20 17:42:21.896969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.719 [2024-11-20 17:42:21.896978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.719 [2024-11-20 17:42:21.896985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.719 [2024-11-20 17:42:21.896994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.719 [2024-11-20 17:42:21.897000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.719 [2024-11-20 17:42:21.897009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.719 [2024-11-20 17:42:21.897015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.719 [2024-11-20 17:42:21.897023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.976 [2024-11-20 17:42:22.295758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:58.976 [2024-11-20 17:42:22.297083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.976 [2024-11-20 17:42:22.297116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.976 [2024-11-20 17:42:22.297129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.976 [2024-11-20 17:42:22.297144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.976 [2024-11-20 17:42:22.297156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.976 [2024-11-20 17:42:22.297163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.976 [2024-11-20 17:42:22.297172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.976 [2024-11-20 17:42:22.297179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.976 [2024-11-20 17:42:22.297187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.976 [2024-11-20 17:42:22.297194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.976 [2024-11-20 17:42:22.297202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.976 [2024-11-20 17:42:22.297209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.976 17:42:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.976 17:42:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.976 17:42:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.976 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.977 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.977 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.234 17:42:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.446 17:42:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.446 17:42:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.446 17:42:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.446 17:42:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.446 17:42:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.446 [2024-11-20 17:42:34.695929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
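Re-attach (sw_hotplug.sh@56-71) mirrors removal: one `echo 1` up front, then per device the driver name, the BDF twice, and an empty string, followed by a 12 s grace period and a check that bdev_bdfs again reports both controllers. xtrace hides every redirection target, so the sysfs paths below are guesses that fit the echoed values, not verbatim source:

    # Sketch only: the echoed values are from the trace, the sysfs targets
    # on the right of each '>' are assumptions.
    echo 1 > /sys/bus/pci/rescan                                            # @56 (assumed)
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (assumed)
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @60 (assumed)
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @61 (assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62 (assumed)
    done
    sleep 12                              # @66: let the hotplug monitor re-attach
    bdfs=($(bdev_bdfs))                   # @70
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]     # @71: every controller is back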
00:12:11.446 [2024-11-20 17:42:34.696984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.446 [2024-11-20 17:42:34.697107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.446 [2024-11-20 17:42:34.697124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.446 [2024-11-20 17:42:34.697141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.446 [2024-11-20 17:42:34.697149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.446 [2024-11-20 17:42:34.697157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.446 [2024-11-20 17:42:34.697165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.446 [2024-11-20 17:42:34.697175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.446 [2024-11-20 17:42:34.697183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.446 [2024-11-20 17:42:34.697192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.446 [2024-11-20 17:42:34.697199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.446 [2024-11-20 17:42:34.697208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.446 17:42:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:11.446 17:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:11.704 [2024-11-20 17:42:35.095936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
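The whole three-event loop runs under the timing_cmd wrapper traced at autotest_common.sh@709-722; when the last event finishes a few lines below, the wrapper echoes the elapsed seconds and sw_hotplug.sh@21-22 reports them as "remove_attach_helper took 45.10s". A condensed reconstruction of that wrapper (the traced exec fd juggling, which keeps the helper's output visible, is simplified away here):

    # Reconstructed from the autotest_common.sh@709-722 xtrace; not verbatim.
    timing_cmd() {
        # TIMEFORMAT=%2R limits bash's `time` report to elapsed seconds.
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        time=$({ time "$@" >/dev/null 2>&1; } 2>&1) || cmd_es=$?
        echo "$time"    # caller captures it: helper_time=$(timing_cmd ...)
        return "$cmd_es"
    }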
00:12:11.704 [2024-11-20 17:42:35.097122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.704 [2024-11-20 17:42:35.097154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.704 [2024-11-20 17:42:35.097166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.704 [2024-11-20 17:42:35.097180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.704 [2024-11-20 17:42:35.097189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.704 [2024-11-20 17:42:35.097196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.704 [2024-11-20 17:42:35.097205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.704 [2024-11-20 17:42:35.097212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.704 [2024-11-20 17:42:35.097221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.704 [2024-11-20 17:42:35.097228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.704 [2024-11-20 17:42:35.097239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.704 [2024-11-20 17:42:35.097245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.704 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:11.704 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.704 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.704 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.704 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.704 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.704 17:42:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.704 17:42:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.704 17:42:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.962 17:42:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.10 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.10 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.10 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.10 2 00:12:24.168 remove_attach_helper took 45.10s to complete (handling 2 nvme drive(s)) 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:24.168 17:42:47 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67434 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67434 ']' 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67434 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67434 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67434' 00:12:24.168 killing process with pid 67434 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67434 00:12:24.168 17:42:47 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67434 00:12:25.548 17:42:48 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:25.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:26.118 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:26.119 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:26.119 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:26.119 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:26.119 00:12:26.119 real 2m29.712s 00:12:26.119 user 1m50.982s 00:12:26.119 sys 0m17.443s 00:12:26.119 17:42:49 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.119 17:42:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.119 ************************************ 00:12:26.119 END TEST sw_hotplug 00:12:26.119 ************************************ 00:12:26.119 17:42:49 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:26.119 17:42:49 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:26.119 17:42:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.119 17:42:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.119 17:42:49 -- common/autotest_common.sh@10 -- # set +x 00:12:26.119 ************************************ 00:12:26.119 START TEST nvme_xnvme 00:12:26.119 ************************************ 00:12:26.119 17:42:49 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:26.379 * Looking for test storage... 00:12:26.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.379 17:42:49 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:26.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.379 --rc genhtml_branch_coverage=1 00:12:26.379 --rc genhtml_function_coverage=1 00:12:26.379 --rc genhtml_legend=1 00:12:26.379 --rc geninfo_all_blocks=1 00:12:26.379 --rc geninfo_unexecuted_blocks=1 00:12:26.379 00:12:26.379 ' 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:26.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.379 --rc genhtml_branch_coverage=1 00:12:26.379 --rc genhtml_function_coverage=1 00:12:26.379 --rc genhtml_legend=1 00:12:26.379 --rc geninfo_all_blocks=1 00:12:26.379 --rc geninfo_unexecuted_blocks=1 00:12:26.379 00:12:26.379 ' 00:12:26.379 17:42:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:26.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.379 --rc genhtml_branch_coverage=1 00:12:26.379 --rc genhtml_function_coverage=1 00:12:26.379 --rc genhtml_legend=1 00:12:26.379 --rc geninfo_all_blocks=1 00:12:26.379 --rc geninfo_unexecuted_blocks=1 00:12:26.379 00:12:26.380 ' 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:26.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.380 --rc genhtml_branch_coverage=1 00:12:26.380 --rc genhtml_function_coverage=1 00:12:26.380 --rc genhtml_legend=1 00:12:26.380 --rc geninfo_all_blocks=1 00:12:26.380 --rc geninfo_unexecuted_blocks=1 00:12:26.380 00:12:26.380 ' 00:12:26.380 17:42:49 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:26.380 17:42:49 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:26.380 17:42:49 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
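The lt/cmp_versions trace above (scripts/common.sh@333-368) implements the lcov version gate: `lt 1.15 2` splits both version strings on `.`, `-`, and `:` and compares them numerically field by field. Condensed to its core (the traced decimal() guard is folded into a :-0 default; not verbatim source):

    # Condensed from the scripts/common.sh@333-368 xtrace.
    cmp_versions() {            # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '>=' || $op == '<=' ]]    # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # so `lt 1.15 2` returns success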
00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:26.380 17:42:49 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:26.380 17:42:49 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:26.380 17:42:49 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:26.380 17:42:49 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:26.380 #define SPDK_CONFIG_H 00:12:26.380 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:26.380 #define SPDK_CONFIG_APPS 1 00:12:26.380 #define SPDK_CONFIG_ARCH native 00:12:26.380 #define SPDK_CONFIG_ASAN 1 00:12:26.380 #undef SPDK_CONFIG_AVAHI 00:12:26.380 #undef SPDK_CONFIG_CET 00:12:26.380 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:26.380 #define SPDK_CONFIG_COVERAGE 1 00:12:26.380 #define SPDK_CONFIG_CROSS_PREFIX 00:12:26.380 #undef SPDK_CONFIG_CRYPTO 00:12:26.380 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:26.380 #undef SPDK_CONFIG_CUSTOMOCF 00:12:26.380 #undef SPDK_CONFIG_DAOS 00:12:26.380 #define SPDK_CONFIG_DAOS_DIR 00:12:26.381 #define SPDK_CONFIG_DEBUG 1 00:12:26.381 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:26.381 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:26.381 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:26.381 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:26.381 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:26.381 #undef SPDK_CONFIG_DPDK_UADK 00:12:26.381 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:26.381 #define SPDK_CONFIG_EXAMPLES 1 00:12:26.381 #undef SPDK_CONFIG_FC 00:12:26.381 #define SPDK_CONFIG_FC_PATH 00:12:26.381 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:26.381 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:26.381 #define SPDK_CONFIG_FSDEV 1 00:12:26.381 #undef SPDK_CONFIG_FUSE 00:12:26.381 #undef SPDK_CONFIG_FUZZER 00:12:26.381 #define SPDK_CONFIG_FUZZER_LIB 00:12:26.381 #undef SPDK_CONFIG_GOLANG 00:12:26.381 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:26.381 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:26.381 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:26.381 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:26.381 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:26.381 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:26.381 #undef SPDK_CONFIG_HAVE_LZ4 00:12:26.381 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:26.381 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:26.381 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:26.381 #define SPDK_CONFIG_IDXD 1 00:12:26.381 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:26.381 #undef SPDK_CONFIG_IPSEC_MB 00:12:26.381 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:26.381 #define SPDK_CONFIG_ISAL 1 00:12:26.381 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:26.381 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:26.381 #define SPDK_CONFIG_LIBDIR 00:12:26.381 #undef SPDK_CONFIG_LTO 00:12:26.381 #define SPDK_CONFIG_MAX_LCORES 128 00:12:26.381 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:26.381 #define SPDK_CONFIG_NVME_CUSE 1 00:12:26.381 #undef SPDK_CONFIG_OCF 00:12:26.381 #define SPDK_CONFIG_OCF_PATH 00:12:26.381 #define SPDK_CONFIG_OPENSSL_PATH 00:12:26.381 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:26.381 #define SPDK_CONFIG_PGO_DIR 00:12:26.381 #undef SPDK_CONFIG_PGO_USE 00:12:26.381 #define SPDK_CONFIG_PREFIX /usr/local 00:12:26.381 #undef SPDK_CONFIG_RAID5F 00:12:26.381 #undef SPDK_CONFIG_RBD 00:12:26.381 #define SPDK_CONFIG_RDMA 1 00:12:26.381 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:26.381 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:26.381 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:26.381 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:26.381 #define SPDK_CONFIG_SHARED 1 00:12:26.381 #undef SPDK_CONFIG_SMA 00:12:26.381 #define SPDK_CONFIG_TESTS 1 00:12:26.381 #undef SPDK_CONFIG_TSAN 00:12:26.381 #define SPDK_CONFIG_UBLK 1 00:12:26.381 #define SPDK_CONFIG_UBSAN 1 00:12:26.381 #undef SPDK_CONFIG_UNIT_TESTS 00:12:26.381 #undef SPDK_CONFIG_URING 00:12:26.381 #define SPDK_CONFIG_URING_PATH 00:12:26.381 #undef SPDK_CONFIG_URING_ZNS 00:12:26.381 #undef SPDK_CONFIG_USDT 00:12:26.381 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:26.381 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:26.381 #undef SPDK_CONFIG_VFIO_USER 00:12:26.381 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:26.381 #define SPDK_CONFIG_VHOST 1 00:12:26.381 #define SPDK_CONFIG_VIRTIO 1 00:12:26.381 #undef SPDK_CONFIG_VTUNE 00:12:26.381 #define SPDK_CONFIG_VTUNE_DIR 00:12:26.381 #define SPDK_CONFIG_WERROR 1 00:12:26.381 #define SPDK_CONFIG_WPDK_DIR 00:12:26.381 #define SPDK_CONFIG_XNVME 1 00:12:26.381 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:26.381 17:42:49 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:26.381 17:42:49 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.381 17:42:49 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.381 17:42:49 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.381 17:42:49 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.381 17:42:49 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.381 17:42:49 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.381 17:42:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.381 17:42:49 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.381 17:42:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:26.381 17:42:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:26.381 
17:42:49 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:26.381 17:42:49 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:26.381 17:42:49 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:26.382 17:42:49 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:26.382 17:42:49 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:26.382 17:42:49 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
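The autotest_common.sh fragment above is the sanitizer bring-up: the ASAN and UBSAN option strings are exported, the old LSAN suppression file is removed, and the known libfuse3.so leak is re-whitelisted before any instrumented test binary runs. A minimal sketch of that sequence, with the option strings and file path copied from this trace (the function wrapper is illustrative, not part of the harness):

# Sketch of the sanitizer setup traced above; values match this run.
setup_sanitizers() {
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  local supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo 'leak:libfuse3.so' > "$supp"   # suppress a known fuse3 leak so LSAN does not fail the run
  export LSAN_OPTIONS=suppressions=$supp
}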
00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68792 ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68792 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.uUmIX9 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.uUmIX9/tests/xnvme /tmp/spdk.uUmIX9 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:26.383 17:42:49 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974818816 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593231360 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974818816 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593231360 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=92316590080 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=7386189824 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:26.383 * Looking for test storage... 
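Everything from the df -T call down to this point is set_test_storage parsing the mount table: each mount's device, filesystem type, size, and free space are cached in associative arrays keyed by mount point, and the candidate test directories (storage_candidates, set earlier in this trace) are then checked against the requested ~2.2 GB. A condensed sketch of that probe, with array and variable names as they appear in the trace:

# Condensed sketch of the storage probe traced above; names match the trace.
declare -A mounts fss sizes avails uses
requested_size=2214592512   # 2 GiB plus overhead, as in this run
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  avails["$mount"]=$avail
  uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)
# Pick the first candidate directory whose mount has enough free space.
for target_dir in "${storage_candidates[@]}"; do
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  target_space=${avails["$mount"]}
  ((target_space >= requested_size)) && break
done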
00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974818816 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:26.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:26.383 17:42:49 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:26.384 17:42:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:26.641 17:42:49 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:26.641 17:42:49 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.641 17:42:49 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.641 17:42:49 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.641 17:42:49 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.641 17:42:49 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.641 17:42:49 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:26.642 17:42:49 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.642 17:42:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:26.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.642 --rc genhtml_branch_coverage=1 00:12:26.642 --rc genhtml_function_coverage=1 00:12:26.642 --rc genhtml_legend=1 00:12:26.642 --rc geninfo_all_blocks=1 00:12:26.642 --rc geninfo_unexecuted_blocks=1 00:12:26.642 00:12:26.642 ' 00:12:26.642 17:42:49 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:26.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.642 --rc genhtml_branch_coverage=1 00:12:26.642 --rc genhtml_function_coverage=1 00:12:26.642 --rc genhtml_legend=1 00:12:26.642 --rc geninfo_all_blocks=1 
00:12:26.642 --rc geninfo_unexecuted_blocks=1 00:12:26.642 00:12:26.642 ' 00:12:26.642 17:42:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:26.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.642 --rc genhtml_branch_coverage=1 00:12:26.642 --rc genhtml_function_coverage=1 00:12:26.642 --rc genhtml_legend=1 00:12:26.642 --rc geninfo_all_blocks=1 00:12:26.642 --rc geninfo_unexecuted_blocks=1 00:12:26.642 00:12:26.642 ' 00:12:26.642 17:42:49 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:26.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.642 --rc genhtml_branch_coverage=1 00:12:26.642 --rc genhtml_function_coverage=1 00:12:26.642 --rc genhtml_legend=1 00:12:26.642 --rc geninfo_all_blocks=1 00:12:26.642 --rc geninfo_unexecuted_blocks=1 00:12:26.642 00:12:26.642 ' 00:12:26.642 17:42:49 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.642 17:42:49 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.642 17:42:49 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.642 17:42:49 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.642 17:42:49 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.642 17:42:49 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:26.642 17:42:49 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.642 17:42:49 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:26.642 17:42:49 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:26.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:26.900 Waiting for block devices as requested 00:12:26.900 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.158 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.158 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.158 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.422 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:32.422 17:42:55 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:32.680 17:42:55 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:32.680 17:42:55 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:32.680 17:42:56 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:32.680 17:42:56 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:32.680 17:42:56 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:32.680 17:42:56 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:32.680 17:42:56 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:32.937 No valid GPT data, bailing 00:12:32.937 17:42:56 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:32.937 17:42:56 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:32.937 17:42:56 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:32.937 17:42:56 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:32.937 17:42:56 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:32.937 17:42:56 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:32.937 17:42:56 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:32.937 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:32.938 17:42:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:32.938 17:42:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:32.938 17:42:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.938 17:42:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.938 ************************************ 00:12:32.938 START TEST xnvme_rpc 00:12:32.938 ************************************ 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69177 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69177 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69177 ']' 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.938 17:42:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.938 [2024-11-20 17:42:56.346601] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
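With the device identified and spdk_tgt coming up on /var/tmp/spdk.sock, the xnvme_rpc test below is a create/inspect/delete round trip over JSON-RPC. Roughly the same sequence with SPDK's stock rpc.py in place of the harness helpers rpc_cmd and rpc_xnvme (the socket wait is shortened to a sleep for brevity; paths as in this run):

# Rough re-creation of the xnvme_rpc round trip below, using stock rpc.py.
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt &
tgt=$!
sleep 2   # the harness uses waitforlisten on /var/tmp/spdk.sock instead
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> libaio
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill "$tgt"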
00:12:32.938 [2024-11-20 17:42:56.346845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69177 ] 00:12:33.195 [2024-11-20 17:42:56.508226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.195 [2024-11-20 17:42:56.629428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.761 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.761 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:33.761 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:33.761 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.761 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.761 xnvme_bdev 00:12:33.761 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.761 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69177 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69177 ']' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69177 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69177 00:12:34.019 killing process with pid 69177 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69177' 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69177 00:12:34.019 17:42:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69177 00:12:35.916 00:12:35.916 real 0m2.783s 00:12:35.916 user 0m2.887s 00:12:35.916 sys 0m0.342s 00:12:35.916 ************************************ 00:12:35.916 END TEST xnvme_rpc 00:12:35.916 ************************************ 00:12:35.916 17:42:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.916 17:42:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.916 17:42:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:35.916 17:42:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:35.916 17:42:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.916 17:42:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.916 ************************************ 00:12:35.916 START TEST xnvme_bdevperf 00:12:35.916 ************************************ 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:35.916 17:42:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:35.916 { 00:12:35.916 "subsystems": [ 00:12:35.916 { 00:12:35.916 "subsystem": "bdev", 00:12:35.916 "config": [ 00:12:35.916 { 00:12:35.916 "params": { 00:12:35.916 "io_mechanism": "libaio", 00:12:35.916 "conserve_cpu": false, 00:12:35.916 "filename": "/dev/nvme0n1", 00:12:35.916 "name": "xnvme_bdev" 00:12:35.916 }, 00:12:35.916 "method": "bdev_xnvme_create" 00:12:35.916 }, 00:12:35.916 { 00:12:35.916 "method": "bdev_wait_for_examine" 00:12:35.916 } 00:12:35.916 ] 00:12:35.916 } 00:12:35.916 ] 00:12:35.916 } 00:12:35.916 [2024-11-20 17:42:59.186478] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:12:35.916 [2024-11-20 17:42:59.186590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69251 ] 00:12:35.916 [2024-11-20 17:42:59.345196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.916 [2024-11-20 17:42:59.449197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.483 Running I/O for 5 seconds... 00:12:38.348 27651.00 IOPS, 108.01 MiB/s [2024-11-20T17:43:02.825Z] 27866.50 IOPS, 108.85 MiB/s [2024-11-20T17:43:03.787Z] 26872.00 IOPS, 104.97 MiB/s [2024-11-20T17:43:04.734Z] 26931.25 IOPS, 105.20 MiB/s [2024-11-20T17:43:04.993Z] 27144.80 IOPS, 106.03 MiB/s 00:12:41.453 Latency(us) 00:12:41.453 [2024-11-20T17:43:04.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.453 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:41.453 xnvme_bdev : 5.02 27048.62 105.66 0.00 0.00 2353.85 371.79 150833.62 00:12:41.453 [2024-11-20T17:43:04.993Z] =================================================================================================================== 00:12:41.453 [2024-11-20T17:43:04.993Z] Total : 27048.62 105.66 0.00 0.00 2353.85 371.79 150833.62 00:12:42.018 17:43:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:42.018 17:43:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:42.018 17:43:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:42.018 17:43:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:42.018 17:43:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.018 { 00:12:42.018 "subsystems": [ 00:12:42.018 { 00:12:42.018 "subsystem": "bdev", 00:12:42.018 "config": [ 00:12:42.018 { 00:12:42.018 "params": { 00:12:42.018 "io_mechanism": "libaio", 00:12:42.018 "conserve_cpu": false, 00:12:42.018 "filename": "/dev/nvme0n1", 00:12:42.018 "name": "xnvme_bdev" 00:12:42.018 }, 00:12:42.018 "method": "bdev_xnvme_create" 00:12:42.018 }, 00:12:42.018 { 00:12:42.018 "method": "bdev_wait_for_examine" 00:12:42.018 } 00:12:42.019 ] 00:12:42.019 } 00:12:42.019 ] 00:12:42.019 } 00:12:42.019 [2024-11-20 17:43:05.541446] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
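The JSON documents interleaved above are what gen_conf emits on fd 62: a one-bdev configuration that tells bdevperf to build an xnvme bdev over /dev/nvme0n1 with the libaio mechanism and wait for it to be examined. An equivalent standalone invocation, with the config written to a regular file instead of a process-substitution fd (the /tmp path is illustrative):

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096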
00:12:42.019 [2024-11-20 17:43:05.541559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69326 ] 00:12:42.277 [2024-11-20 17:43:05.702413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.535 [2024-11-20 17:43:05.821049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.793 Running I/O for 5 seconds... 00:12:44.690 35630.00 IOPS, 139.18 MiB/s [2024-11-20T17:43:09.165Z] 35101.00 IOPS, 137.11 MiB/s [2024-11-20T17:43:10.118Z] 34922.67 IOPS, 136.42 MiB/s [2024-11-20T17:43:11.492Z] 35013.25 IOPS, 136.77 MiB/s 00:12:47.952 Latency(us) 00:12:47.952 [2024-11-20T17:43:11.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.952 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:47.952 xnvme_bdev : 5.00 35292.20 137.86 0.00 0.00 1808.54 220.55 8973.39 00:12:47.952 [2024-11-20T17:43:11.492Z] =================================================================================================================== 00:12:47.952 [2024-11-20T17:43:11.492Z] Total : 35292.20 137.86 0.00 0.00 1808.54 220.55 8973.39 00:12:48.519 00:12:48.519 real 0m12.709s 00:12:48.519 user 0m4.804s 00:12:48.519 sys 0m6.286s 00:12:48.519 17:43:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.519 17:43:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.519 ************************************ 00:12:48.519 END TEST xnvme_bdevperf 00:12:48.519 ************************************ 00:12:48.519 17:43:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:48.519 17:43:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:48.519 17:43:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.519 17:43:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.519 ************************************ 00:12:48.519 START TEST xnvme_fio_plugin 00:12:48.519 ************************************ 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.519 17:43:11 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:48.519 17:43:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.519 { 00:12:48.519 "subsystems": [ 00:12:48.519 { 00:12:48.519 "subsystem": "bdev", 00:12:48.519 "config": [ 00:12:48.519 { 00:12:48.519 "params": { 00:12:48.520 "io_mechanism": "libaio", 00:12:48.520 "conserve_cpu": false, 00:12:48.520 "filename": "/dev/nvme0n1", 00:12:48.520 "name": "xnvme_bdev" 00:12:48.520 }, 00:12:48.520 "method": "bdev_xnvme_create" 00:12:48.520 }, 00:12:48.520 { 00:12:48.520 "method": "bdev_wait_for_examine" 00:12:48.520 } 00:12:48.520 ] 00:12:48.520 } 00:12:48.520 ] 00:12:48.520 } 00:12:48.778 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:48.778 fio-3.35 00:12:48.778 Starting 1 thread 00:12:55.342 00:12:55.342 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69440: Wed Nov 20 17:43:17 2024 00:12:55.342 read: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5001msec) 00:12:55.342 slat (usec): min=4, max=1818, avg=21.67, stdev=94.61 00:12:55.342 clat (usec): min=105, max=7720, avg=1303.09, stdev=508.51 00:12:55.342 lat (usec): min=166, max=7724, avg=1324.77, stdev=498.50 00:12:55.342 clat percentiles (usec): 00:12:55.342 | 1.00th=[ 249], 5.00th=[ 529], 10.00th=[ 652], 20.00th=[ 873], 00:12:55.342 | 30.00th=[ 1037], 40.00th=[ 1172], 50.00th=[ 1303], 60.00th=[ 1418], 00:12:55.342 | 70.00th=[ 1549], 80.00th=[ 1696], 90.00th=[ 1926], 95.00th=[ 2147], 00:12:55.342 | 99.00th=[ 2769], 99.50th=[ 2999], 99.90th=[ 3490], 99.95th=[ 3687], 00:12:55.342 | 99.99th=[ 4178] 00:12:55.342 bw ( KiB/s): min=130168, max=141752, per=99.78%, avg=134925.33, stdev=3534.93, 
samples=9 00:12:55.342 iops : min=32542, max=35438, avg=33731.33, stdev=883.73, samples=9 00:12:55.342 lat (usec) : 250=1.02%, 500=3.32%, 750=10.49%, 1000=12.51% 00:12:55.342 lat (msec) : 2=64.90%, 4=7.75%, 10=0.01% 00:12:55.342 cpu : usr=40.64%, sys=51.80%, ctx=48, majf=0, minf=764 00:12:55.342 IO depths : 1=0.4%, 2=1.1%, 4=3.0%, 8=8.2%, 16=23.1%, 32=62.1%, >=64=2.1% 00:12:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.342 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:55.342 issued rwts: total=169068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:55.342 00:12:55.342 Run status group 0 (all jobs): 00:12:55.342 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (693MB), run=5001-5001msec 00:12:55.342 ----------------------------------------------------- 00:12:55.342 Suppressions used: 00:12:55.342 count bytes template 00:12:55.342 1 11 /usr/src/fio/parse.c 00:12:55.342 1 8 libtcmalloc_minimal.so 00:12:55.342 1 904 libcrypto.so 00:12:55.342 ----------------------------------------------------- 00:12:55.342 00:12:55.342 17:43:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.342 17:43:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.342 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.342 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:55.342 17:43:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:55.343 17:43:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.343 { 00:12:55.343 "subsystems": [ 00:12:55.343 { 00:12:55.343 "subsystem": "bdev", 00:12:55.343 "config": [ 00:12:55.343 { 00:12:55.343 "params": { 00:12:55.343 "io_mechanism": "libaio", 00:12:55.343 "conserve_cpu": false, 00:12:55.343 "filename": "/dev/nvme0n1", 00:12:55.343 "name": "xnvme_bdev" 00:12:55.343 }, 00:12:55.343 "method": "bdev_xnvme_create" 00:12:55.343 }, 00:12:55.343 { 00:12:55.343 "method": "bdev_wait_for_examine" 00:12:55.343 } 00:12:55.343 ] 00:12:55.343 } 00:12:55.343 ] 00:12:55.343 } 00:12:55.343 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:55.343 fio-3.35 00:12:55.343 Starting 1 thread 00:13:01.900 00:13:01.900 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69532: Wed Nov 20 17:43:24 2024 00:13:01.900 write: IOPS=33.8k, BW=132MiB/s (138MB/s)(659MiB/5001msec); 0 zone resets 00:13:01.900 slat (usec): min=4, max=1580, avg=23.43, stdev=92.14 00:13:01.900 clat (usec): min=106, max=4717, avg=1252.73, stdev=534.33 00:13:01.900 lat (usec): min=161, max=4811, avg=1276.16, stdev=525.86 00:13:01.900 clat percentiles (usec): 00:13:01.900 | 1.00th=[ 237], 5.00th=[ 461], 10.00th=[ 594], 20.00th=[ 766], 00:13:01.900 | 30.00th=[ 947], 40.00th=[ 1090], 50.00th=[ 1237], 60.00th=[ 1369], 00:13:01.900 | 70.00th=[ 1500], 80.00th=[ 1663], 90.00th=[ 1926], 95.00th=[ 2147], 00:13:01.900 | 99.00th=[ 2769], 99.50th=[ 3032], 99.90th=[ 3589], 99.95th=[ 3851], 00:13:01.900 | 99.99th=[ 4228] 00:13:01.900 bw ( KiB/s): min=127296, max=145816, per=99.85%, avg=134814.22, stdev=5384.86, samples=9 00:13:01.900 iops : min=31824, max=36454, avg=33703.56, stdev=1346.21, samples=9 00:13:01.900 lat (usec) : 250=1.21%, 500=4.75%, 750=13.19%, 1000=14.15% 00:13:01.900 lat (msec) : 2=58.87%, 4=7.81%, 10=0.03% 00:13:01.900 cpu : usr=35.86%, sys=56.42%, ctx=33, majf=0, minf=766 00:13:01.900 IO depths : 1=0.3%, 2=1.0%, 4=2.9%, 8=8.4%, 16=23.3%, 32=62.1%, >=64=2.0% 00:13:01.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.900 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:01.900 issued rwts: total=0,168803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.900 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:01.900 00:13:01.900 Run status group 0 (all jobs): 00:13:01.900 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5001-5001msec 00:13:01.900 ----------------------------------------------------- 00:13:01.900 Suppressions used: 00:13:01.900 count bytes template 00:13:01.900 1 11 /usr/src/fio/parse.c 00:13:01.900 1 8 libtcmalloc_minimal.so 00:13:01.900 1 904 libcrypto.so 00:13:01.900 ----------------------------------------------------- 00:13:01.900 00:13:02.158 00:13:02.158 real 0m13.541s 00:13:02.158 user 0m6.472s 00:13:02.158 sys 0m5.920s 00:13:02.158 17:43:25 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.158 ************************************ 00:13:02.158 END TEST xnvme_fio_plugin 00:13:02.158 ************************************ 00:13:02.158 17:43:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:02.158 17:43:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:02.158 17:43:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:02.158 17:43:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:02.158 17:43:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:02.158 17:43:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.158 17:43:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.158 17:43:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.158 ************************************ 00:13:02.158 START TEST xnvme_rpc 00:13:02.158 ************************************ 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:02.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69618 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69618 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69618 ']' 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.158 17:43:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.158 [2024-11-20 17:43:25.589180] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
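The xnvme_rpc test that starts here exercises a fixed RPC round trip: create an xnvme bdev, read its parameters back through framework_get_config, and delete it again. A minimal standalone sketch of that round trip, assuming SPDK's scripts/rpc.py client in place of the harness's rpc_cmd wrapper (device path, bdev name and flags exactly as in this run):

# create an xnvme bdev on /dev/nvme0n1 with the libaio mechanism; -c enables conserve_cpu
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
# read the bdev config back and pick out one parameter, the same jq filter the test uses
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# tear the bdev down again
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The conserve_cpu query is how the test verifies that the -c flag round-trips as true through the RPC layer.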
00:13:02.158 [2024-11-20 17:43:25.589297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69618 ] 00:13:02.418 [2024-11-20 17:43:25.748209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.418 [2024-11-20 17:43:25.848484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.983 xnvme_bdev 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.983 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:03.241 17:43:26 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69618 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69618 ']' 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69618 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69618 00:13:03.241 killing process with pid 69618 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69618' 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69618 00:13:03.241 17:43:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69618 00:13:04.615 ************************************ 00:13:04.615 END TEST xnvme_rpc 00:13:04.615 ************************************ 00:13:04.615 00:13:04.615 real 0m2.637s 00:13:04.615 user 0m2.709s 00:13:04.615 sys 0m0.361s 00:13:04.615 17:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.615 17:43:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.873 17:43:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:04.873 17:43:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:04.873 17:43:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.873 17:43:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.873 ************************************ 00:13:04.873 START TEST xnvme_bdevperf 00:13:04.873 ************************************ 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:04.873 17:43:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.873 { 00:13:04.873 "subsystems": [ 00:13:04.873 { 00:13:04.873 "subsystem": "bdev", 00:13:04.873 "config": [ 00:13:04.873 { 00:13:04.873 "params": { 00:13:04.873 "io_mechanism": "libaio", 00:13:04.873 "conserve_cpu": true, 00:13:04.873 "filename": "/dev/nvme0n1", 00:13:04.873 "name": "xnvme_bdev" 00:13:04.873 }, 00:13:04.873 "method": "bdev_xnvme_create" 00:13:04.873 }, 00:13:04.873 { 00:13:04.873 "method": "bdev_wait_for_examine" 00:13:04.873 } 00:13:04.873 ] 00:13:04.873 } 00:13:04.873 ] 00:13:04.873 } 00:13:04.873 [2024-11-20 17:43:28.276738] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:13:04.873 [2024-11-20 17:43:28.276852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69687 ] 00:13:05.131 [2024-11-20 17:43:28.437146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.131 [2024-11-20 17:43:28.539353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.389 Running I/O for 5 seconds... 00:13:07.694 34391.00 IOPS, 134.34 MiB/s [2024-11-20T17:43:32.168Z] 32752.00 IOPS, 127.94 MiB/s [2024-11-20T17:43:33.103Z] 31654.00 IOPS, 123.65 MiB/s [2024-11-20T17:43:34.038Z] 31659.25 IOPS, 123.67 MiB/s [2024-11-20T17:43:34.038Z] 32040.00 IOPS, 125.16 MiB/s 00:13:10.498 Latency(us) 00:13:10.498 [2024-11-20T17:43:34.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.498 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:10.498 xnvme_bdev : 5.01 31994.97 124.98 0.00 0.00 1995.42 469.46 7813.91 00:13:10.498 [2024-11-20T17:43:34.038Z] =================================================================================================================== 00:13:10.498 [2024-11-20T17:43:34.038Z] Total : 31994.97 124.98 0.00 0.00 1995.42 469.46 7813.91 00:13:11.068 17:43:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:11.068 17:43:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:11.068 17:43:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:11.068 17:43:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:11.068 17:43:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:11.068 { 00:13:11.068 "subsystems": [ 00:13:11.068 { 00:13:11.068 "subsystem": "bdev", 00:13:11.068 "config": [ 00:13:11.068 { 00:13:11.068 "params": { 00:13:11.068 "io_mechanism": "libaio", 00:13:11.068 "conserve_cpu": true, 00:13:11.068 "filename": "/dev/nvme0n1", 00:13:11.068 "name": "xnvme_bdev" 00:13:11.068 }, 00:13:11.068 "method": "bdev_xnvme_create" 00:13:11.068 }, 00:13:11.068 { 00:13:11.068 "method": "bdev_wait_for_examine" 00:13:11.068 } 00:13:11.068 ] 00:13:11.068 } 00:13:11.068 ] 00:13:11.068 } 00:13:11.068 [2024-11-20 17:43:34.603662] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:11.068 [2024-11-20 17:43:34.603777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69762 ] 00:13:11.327 [2024-11-20 17:43:34.763539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.327 [2024-11-20 17:43:34.860764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.895 Running I/O for 5 seconds... 00:13:13.774 34829.00 IOPS, 136.05 MiB/s [2024-11-20T17:43:38.250Z] 34554.50 IOPS, 134.98 MiB/s [2024-11-20T17:43:39.188Z] 34910.33 IOPS, 136.37 MiB/s [2024-11-20T17:43:40.148Z] 26961.50 IOPS, 105.32 MiB/s [2024-11-20T17:43:40.148Z] 22188.80 IOPS, 86.67 MiB/s 00:13:16.608 Latency(us) 00:13:16.608 [2024-11-20T17:43:40.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.608 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:16.608 xnvme_bdev : 5.02 22132.12 86.45 0.00 0.00 2882.84 63.41 45371.08 00:13:16.608 [2024-11-20T17:43:40.148Z] =================================================================================================================== 00:13:16.608 [2024-11-20T17:43:40.148Z] Total : 22132.12 86.45 0.00 0.00 2882.84 63.41 45371.08 00:13:17.542 00:13:17.542 real 0m12.666s 00:13:17.542 user 0m6.166s 00:13:17.542 sys 0m5.135s 00:13:17.542 17:43:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.542 17:43:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:17.542 ************************************ 00:13:17.542 END TEST xnvme_bdevperf 00:13:17.542 ************************************ 00:13:17.542 17:43:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:17.542 17:43:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.542 17:43:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.542 17:43:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.542 ************************************ 00:13:17.542 START TEST xnvme_fio_plugin 00:13:17.542 ************************************ 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:17.542 17:43:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.542 { 00:13:17.542 "subsystems": [ 00:13:17.542 { 00:13:17.542 "subsystem": "bdev", 00:13:17.542 "config": [ 00:13:17.542 { 00:13:17.542 "params": { 00:13:17.542 "io_mechanism": "libaio", 00:13:17.542 "conserve_cpu": true, 00:13:17.542 "filename": "/dev/nvme0n1", 00:13:17.542 "name": "xnvme_bdev" 00:13:17.542 }, 00:13:17.542 "method": "bdev_xnvme_create" 00:13:17.542 }, 00:13:17.542 { 00:13:17.542 "method": "bdev_wait_for_examine" 00:13:17.542 } 00:13:17.542 ] 00:13:17.542 } 00:13:17.542 ] 00:13:17.542 } 00:13:17.801 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:17.801 fio-3.35 00:13:17.801 Starting 1 thread 00:13:24.355 00:13:24.355 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69881: Wed Nov 20 17:43:46 2024 00:13:24.355 read: IOPS=36.7k, BW=143MiB/s (150MB/s)(717MiB/5002msec) 00:13:24.355 slat (usec): min=4, max=2009, avg=20.49, stdev=82.63 00:13:24.355 clat (usec): min=87, max=9452, avg=1197.05, stdev=555.55 00:13:24.355 lat (usec): min=148, max=9458, avg=1217.54, stdev=551.26 00:13:24.356 clat percentiles (usec): 00:13:24.356 | 1.00th=[ 239], 5.00th=[ 437], 10.00th=[ 578], 20.00th=[ 742], 00:13:24.356 | 30.00th=[ 889], 40.00th=[ 1020], 50.00th=[ 1139], 60.00th=[ 1254], 00:13:24.356 | 70.00th=[ 1385], 80.00th=[ 1565], 90.00th=[ 1876], 95.00th=[ 2212], 00:13:24.356 | 99.00th=[ 3032], 99.50th=[ 3294], 99.90th=[ 4047], 99.95th=[ 4293], 00:13:24.356 | 99.99th=[ 6390] 00:13:24.356 bw ( KiB/s): 
min=137936, max=150224, per=99.08%, avg=145392.00, stdev=4823.50, samples=9 00:13:24.356 iops : min=34484, max=37556, avg=36348.00, stdev=1205.88, samples=9 00:13:24.356 lat (usec) : 100=0.01%, 250=1.20%, 500=5.52%, 750=13.64%, 1000=18.03% 00:13:24.356 lat (msec) : 2=54.09%, 4=7.40%, 10=0.11% 00:13:24.356 cpu : usr=38.89%, sys=52.41%, ctx=11, majf=0, minf=764 00:13:24.356 IO depths : 1=0.4%, 2=1.0%, 4=2.9%, 8=8.4%, 16=23.6%, 32=61.6%, >=64=2.1% 00:13:24.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.356 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:24.356 issued rwts: total=183501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.356 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:24.356 00:13:24.356 Run status group 0 (all jobs): 00:13:24.356 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=717MiB (752MB), run=5002-5002msec 00:13:24.356 ----------------------------------------------------- 00:13:24.356 Suppressions used: 00:13:24.356 count bytes template 00:13:24.356 1 11 /usr/src/fio/parse.c 00:13:24.356 1 8 libtcmalloc_minimal.so 00:13:24.356 1 904 libcrypto.so 00:13:24.356 ----------------------------------------------------- 00:13:24.356 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
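The ldd/grep/awk sequence running here repeats before every fio invocation in this log: the SPDK fio plugin is ASAN-instrumented while fio itself is not, so the ASAN runtime has to be loaded ahead of everything else or the plugin refuses to initialize. A minimal sketch of the same detection and launch, with the plugin path and fio arguments as in this run:

# detect the ASAN runtime the plugin links against (empty if built without ASAN)
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# preload the sanitizer runtime first, then the plugin, as the harness does;
# /dev/fd/62 is the JSON config the harness passes on an inherited descriptor,
# so a standalone run would point --spdk_json_conf at a config file instead
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev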
00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:24.356 17:43:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:24.356 { 00:13:24.356 "subsystems": [ 00:13:24.356 { 00:13:24.356 "subsystem": "bdev", 00:13:24.356 "config": [ 00:13:24.356 { 00:13:24.356 "params": { 00:13:24.356 "io_mechanism": "libaio", 00:13:24.356 "conserve_cpu": true, 00:13:24.356 "filename": "/dev/nvme0n1", 00:13:24.356 "name": "xnvme_bdev" 00:13:24.356 }, 00:13:24.356 "method": "bdev_xnvme_create" 00:13:24.356 }, 00:13:24.356 { 00:13:24.356 "method": "bdev_wait_for_examine" 00:13:24.356 } 00:13:24.356 ] 00:13:24.356 } 00:13:24.356 ] 00:13:24.356 } 00:13:24.614 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:24.615 fio-3.35 00:13:24.615 Starting 1 thread 00:13:31.178 00:13:31.178 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69967: Wed Nov 20 17:43:53 2024 00:13:31.178 write: IOPS=35.9k, BW=140MiB/s (147MB/s)(701MiB/5002msec); 0 zone resets 00:13:31.178 slat (usec): min=4, max=1772, avg=21.61, stdev=84.11 00:13:31.178 clat (usec): min=17, max=8874, avg=1208.18, stdev=597.15 00:13:31.178 lat (usec): min=59, max=8886, avg=1229.80, stdev=592.65 00:13:31.178 clat percentiles (usec): 00:13:31.178 | 1.00th=[ 237], 5.00th=[ 420], 10.00th=[ 562], 20.00th=[ 734], 00:13:31.178 | 30.00th=[ 881], 40.00th=[ 1020], 50.00th=[ 1156], 60.00th=[ 1287], 00:13:31.178 | 70.00th=[ 1418], 80.00th=[ 1598], 90.00th=[ 1893], 95.00th=[ 2180], 00:13:31.178 | 99.00th=[ 2999], 99.50th=[ 3523], 99.90th=[ 5997], 99.95th=[ 6980], 00:13:31.178 | 99.99th=[ 8291] 00:13:31.178 bw ( KiB/s): min=132688, max=156048, per=100.00%, avg=144230.22, stdev=8827.61, samples=9 00:13:31.178 iops : min=33172, max=39012, avg=36057.56, stdev=2206.90, samples=9 00:13:31.178 lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=1.23%, 500=6.11% 00:13:31.178 lat (usec) : 750=13.98%, 1000=17.25% 00:13:31.178 lat (msec) : 2=53.71%, 4=7.39%, 10=0.30% 00:13:31.178 cpu : usr=37.19%, sys=54.09%, ctx=13, majf=0, minf=766 00:13:31.178 IO depths : 1=0.3%, 2=0.9%, 4=2.9%, 8=8.6%, 16=23.9%, 32=61.2%, >=64=2.1% 00:13:31.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.178 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:31.178 issued rwts: total=0,179358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:31.178 00:13:31.178 Run status group 0 (all jobs): 00:13:31.178 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=701MiB (735MB), run=5002-5002msec 00:13:31.178 ----------------------------------------------------- 00:13:31.178 Suppressions used: 00:13:31.178 count bytes template 00:13:31.178 1 11 /usr/src/fio/parse.c 00:13:31.178 1 8 libtcmalloc_minimal.so 00:13:31.178 1 904 libcrypto.so 00:13:31.178 ----------------------------------------------------- 00:13:31.178 00:13:31.178 00:13:31.178 real 0m13.587s 
00:13:31.178 user 0m6.486s 00:13:31.178 sys 0m5.835s 00:13:31.178 17:43:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.178 ************************************ 00:13:31.178 END TEST xnvme_fio_plugin 00:13:31.178 ************************************ 00:13:31.178 17:43:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:31.178 17:43:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:31.178 17:43:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.178 17:43:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.178 17:43:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.178 ************************************ 00:13:31.178 START TEST xnvme_rpc 00:13:31.178 ************************************ 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70052 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70052 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70052 ']' 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:31.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.178 17:43:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.178 [2024-11-20 17:43:54.683307] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:13:31.178 [2024-11-20 17:43:54.683415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70052 ] 00:13:31.436 [2024-11-20 17:43:54.842497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.436 [2024-11-20 17:43:54.942729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.003 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.003 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:32.003 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:32.003 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.003 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.261 xnvme_bdev 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70052 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70052 ']' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70052 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70052 00:13:32.261 killing process with pid 70052 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70052' 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70052 00:13:32.261 17:43:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70052 00:13:33.700 00:13:33.700 real 0m2.604s 00:13:33.700 user 0m2.703s 00:13:33.700 sys 0m0.338s 00:13:33.700 17:43:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.700 ************************************ 00:13:33.700 END TEST xnvme_rpc 00:13:33.700 ************************************ 00:13:33.700 17:43:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.958 17:43:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:33.958 17:43:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.958 17:43:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.958 17:43:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.958 ************************************ 00:13:33.958 START TEST xnvme_bdevperf 00:13:33.958 ************************************ 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:33.958 17:43:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:33.958 { 00:13:33.958 "subsystems": [ 00:13:33.958 { 00:13:33.958 "subsystem": "bdev", 00:13:33.958 "config": [ 00:13:33.958 { 00:13:33.958 "params": { 00:13:33.958 "io_mechanism": "io_uring", 00:13:33.958 "conserve_cpu": false, 00:13:33.958 "filename": "/dev/nvme0n1", 00:13:33.958 "name": "xnvme_bdev" 00:13:33.958 }, 00:13:33.958 "method": "bdev_xnvme_create" 00:13:33.958 }, 00:13:33.958 { 00:13:33.958 "method": "bdev_wait_for_examine" 00:13:33.958 } 00:13:33.958 ] 00:13:33.958 } 00:13:33.958 ] 00:13:33.958 } 00:13:33.958 [2024-11-20 17:43:57.340399] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:13:33.958 [2024-11-20 17:43:57.340512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70122 ] 00:13:33.958 [2024-11-20 17:43:57.496819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.216 [2024-11-20 17:43:57.596300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.474 Running I/O for 5 seconds... 00:13:36.361 40265.00 IOPS, 157.29 MiB/s [2024-11-20T17:44:00.896Z] 39428.50 IOPS, 154.02 MiB/s [2024-11-20T17:44:02.269Z] 39507.00 IOPS, 154.32 MiB/s [2024-11-20T17:44:03.202Z] 38811.75 IOPS, 151.61 MiB/s [2024-11-20T17:44:03.202Z] 38743.80 IOPS, 151.34 MiB/s 00:13:39.662 Latency(us) 00:13:39.662 [2024-11-20T17:44:03.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.662 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:39.662 xnvme_bdev : 5.00 38734.73 151.31 0.00 0.00 1648.46 80.74 15526.99 00:13:39.662 [2024-11-20T17:44:03.202Z] =================================================================================================================== 00:13:39.662 [2024-11-20T17:44:03.202Z] Total : 38734.73 151.31 0.00 0.00 1648.46 80.74 15526.99 00:13:40.228 17:44:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:40.228 17:44:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:40.228 17:44:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:40.228 17:44:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:40.228 17:44:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:40.228 { 00:13:40.228 "subsystems": [ 00:13:40.228 { 00:13:40.228 "subsystem": "bdev", 00:13:40.228 "config": [ 00:13:40.228 { 00:13:40.228 "params": { 00:13:40.228 "io_mechanism": "io_uring", 00:13:40.228 "conserve_cpu": false, 00:13:40.228 "filename": "/dev/nvme0n1", 00:13:40.228 "name": "xnvme_bdev" 00:13:40.228 }, 00:13:40.228 "method": "bdev_xnvme_create" 00:13:40.228 }, 00:13:40.228 { 00:13:40.228 "method": "bdev_wait_for_examine" 00:13:40.228 } 00:13:40.228 ] 00:13:40.228 } 00:13:40.228 ] 00:13:40.228 } 00:13:40.228 [2024-11-20 17:44:03.636309] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
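Every bdevperf invocation in this log receives its bdev table as JSON on an inherited descriptor (--json /dev/fd/62), emitted by gen_conf. A standalone equivalent, sketched with a temporary config file and the io_uring parameters printed just above (binary path as in this run):

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# same queue depth, workload, runtime, target bdev and IO size as the run above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
  -q 64 -w randread -t 5 -T xnvme_bdev -o 4096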
00:13:40.228 [2024-11-20 17:44:03.636576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70197 ] 00:13:40.486 [2024-11-20 17:44:03.796461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.486 [2024-11-20 17:44:03.894435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.744 Running I/O for 5 seconds... 00:13:42.611 5488.00 IOPS, 21.44 MiB/s [2024-11-20T17:44:07.146Z] 5876.00 IOPS, 22.95 MiB/s [2024-11-20T17:44:08.518Z] 5978.00 IOPS, 23.35 MiB/s [2024-11-20T17:44:09.452Z] 6143.25 IOPS, 24.00 MiB/s [2024-11-20T17:44:09.452Z] 6262.20 IOPS, 24.46 MiB/s 00:13:45.912 Latency(us) 00:13:45.912 [2024-11-20T17:44:09.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.912 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:45.912 xnvme_bdev : 5.01 6260.71 24.46 0.00 0.00 10209.24 39.19 34683.67 00:13:45.912 [2024-11-20T17:44:09.452Z] =================================================================================================================== 00:13:45.912 [2024-11-20T17:44:09.452Z] Total : 6260.71 24.46 0.00 0.00 10209.24 39.19 34683.67 00:13:46.479 00:13:46.479 real 0m12.596s 00:13:46.479 user 0m5.808s 00:13:46.479 sys 0m6.548s 00:13:46.479 ************************************ 00:13:46.479 END TEST xnvme_bdevperf 00:13:46.479 ************************************ 00:13:46.479 17:44:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.479 17:44:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:46.479 17:44:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:46.479 17:44:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:46.479 17:44:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.479 17:44:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.479 ************************************ 00:13:46.479 START TEST xnvme_fio_plugin 00:13:46.479 ************************************ 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:46.479 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:46.480 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:46.480 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:46.480 17:44:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:46.480 { 00:13:46.480 "subsystems": [ 00:13:46.480 { 00:13:46.480 "subsystem": "bdev", 00:13:46.480 "config": [ 00:13:46.480 { 00:13:46.480 "params": { 00:13:46.480 "io_mechanism": "io_uring", 00:13:46.480 "conserve_cpu": false, 00:13:46.480 "filename": "/dev/nvme0n1", 00:13:46.480 "name": "xnvme_bdev" 00:13:46.480 }, 00:13:46.480 "method": "bdev_xnvme_create" 00:13:46.480 }, 00:13:46.480 { 00:13:46.480 "method": "bdev_wait_for_examine" 00:13:46.480 } 00:13:46.480 ] 00:13:46.480 } 00:13:46.480 ] 00:13:46.480 } 00:13:46.738 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:46.738 fio-3.35 00:13:46.738 Starting 1 thread 00:13:53.360 00:13:53.360 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70311: Wed Nov 20 17:44:15 2024 00:13:53.360 read: IOPS=45.0k, BW=176MiB/s (184MB/s)(878MiB/5002msec) 00:13:53.360 slat (nsec): min=2778, max=75705, avg=3121.12, stdev=1355.31 00:13:53.360 clat (usec): min=228, max=9223, avg=1303.67, stdev=326.20 00:13:53.360 lat (usec): min=240, max=9226, avg=1306.79, stdev=326.26 00:13:53.360 clat percentiles (usec): 00:13:53.360 | 1.00th=[ 791], 5.00th=[ 906], 10.00th=[ 971], 20.00th=[ 1074], 00:13:53.360 | 30.00th=[ 1139], 40.00th=[ 1205], 50.00th=[ 1254], 60.00th=[ 1336], 00:13:53.360 | 70.00th=[ 1434], 80.00th=[ 1516], 90.00th=[ 1631], 95.00th=[ 1745], 00:13:53.360 | 99.00th=[ 2409], 99.50th=[ 2737], 99.90th=[ 3982], 99.95th=[ 5014], 00:13:53.360 | 99.99th=[ 7046] 00:13:53.360 bw ( KiB/s): min=151552, 
max=199680, per=100.00%, avg=180930.67, stdev=15807.53, samples=9 00:13:53.360 iops : min=37888, max=49920, avg=45232.67, stdev=3951.88, samples=9 00:13:53.360 lat (usec) : 250=0.01%, 500=0.08%, 750=0.39%, 1000=12.23% 00:13:53.360 lat (msec) : 2=85.41%, 4=1.78%, 10=0.10% 00:13:53.360 cpu : usr=35.25%, sys=63.73%, ctx=12, majf=0, minf=762 00:13:53.360 IO depths : 1=1.3%, 2=2.7%, 4=5.5%, 8=11.7%, 16=24.9%, 32=52.2%, >=64=1.8% 00:13:53.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.360 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:13:53.360 issued rwts: total=224864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.360 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:53.360 00:13:53.360 Run status group 0 (all jobs): 00:13:53.360 READ: bw=176MiB/s (184MB/s), 176MiB/s-176MiB/s (184MB/s-184MB/s), io=878MiB (921MB), run=5002-5002msec 00:13:53.360 ----------------------------------------------------- 00:13:53.360 Suppressions used: 00:13:53.360 count bytes template 00:13:53.360 1 11 /usr/src/fio/parse.c 00:13:53.360 1 8 libtcmalloc_minimal.so 00:13:53.360 1 904 libcrypto.so 00:13:53.360 ----------------------------------------------------- 00:13:53.360 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.360 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:53.361 17:44:16 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:53.361 17:44:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.361 { 00:13:53.361 "subsystems": [ 00:13:53.361 { 00:13:53.361 "subsystem": "bdev", 00:13:53.361 "config": [ 00:13:53.361 { 00:13:53.361 "params": { 00:13:53.361 "io_mechanism": "io_uring", 00:13:53.361 "conserve_cpu": false, 00:13:53.361 "filename": "/dev/nvme0n1", 00:13:53.361 "name": "xnvme_bdev" 00:13:53.361 }, 00:13:53.361 "method": "bdev_xnvme_create" 00:13:53.361 }, 00:13:53.361 { 00:13:53.361 "method": "bdev_wait_for_examine" 00:13:53.361 } 00:13:53.361 ] 00:13:53.361 } 00:13:53.361 ] 00:13:53.361 } 00:13:53.361 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:53.361 fio-3.35 00:13:53.361 Starting 1 thread 00:13:59.925 00:13:59.925 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70403: Wed Nov 20 17:44:22 2024 00:13:59.925 write: IOPS=44.2k, BW=173MiB/s (181MB/s)(865MiB/5002msec); 0 zone resets 00:13:59.925 slat (nsec): min=2828, max=99810, avg=3921.75, stdev=1921.98 00:13:59.925 clat (usec): min=59, max=7033, avg=1299.02, stdev=481.88 00:13:59.925 lat (usec): min=62, max=7036, avg=1302.94, stdev=482.00 00:13:59.925 clat percentiles (usec): 00:13:59.925 | 1.00th=[ 515], 5.00th=[ 848], 10.00th=[ 930], 20.00th=[ 1029], 00:13:59.925 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1221], 60.00th=[ 1287], 00:13:59.925 | 70.00th=[ 1369], 80.00th=[ 1483], 90.00th=[ 1631], 95.00th=[ 1893], 00:13:59.925 | 99.00th=[ 3654], 99.50th=[ 4293], 99.90th=[ 5145], 99.95th=[ 5473], 00:13:59.925 | 99.99th=[ 6194] 00:13:59.925 bw ( KiB/s): min=145376, max=191488, per=100.00%, avg=178545.78, stdev=14632.85, samples=9 00:13:59.925 iops : min=36344, max=47872, avg=44636.44, stdev=3658.21, samples=9 00:13:59.925 lat (usec) : 100=0.02%, 250=0.20%, 500=0.72%, 750=1.63%, 1000=14.26% 00:13:59.925 lat (msec) : 2=78.96%, 4=3.56%, 10=0.65% 00:13:59.925 cpu : usr=35.25%, sys=63.55%, ctx=44, majf=0, minf=764 00:13:59.925 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.3%, 16=23.6%, 32=53.6%, >=64=2.0% 00:13:59.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.925 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:59.925 issued rwts: total=0,221316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.925 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:59.925 00:13:59.925 Run status group 0 (all jobs): 00:13:59.925 WRITE: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=865MiB (907MB), run=5002-5002msec 00:13:59.925 ----------------------------------------------------- 00:13:59.925 Suppressions used: 00:13:59.925 count bytes template 00:13:59.925 1 11 /usr/src/fio/parse.c 00:13:59.925 1 8 libtcmalloc_minimal.so 00:13:59.925 1 904 libcrypto.so 00:13:59.925 ----------------------------------------------------- 00:13:59.925 00:13:59.925 ************************************ 00:13:59.925 END TEST xnvme_fio_plugin 
00:13:59.925 00:13:59.925 real 0m13.466s 00:13:59.925 user 0m6.150s 00:13:59.925 sys 0m6.882s 00:13:59.925 17:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.925 17:44:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:59.925 ************************************ 00:13:59.925 17:44:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:59.925 17:44:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:59.925 17:44:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:59.925 17:44:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:59.925 17:44:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.925 17:44:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.925 17:44:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.183 ************************************ 00:14:00.183 START TEST xnvme_rpc 00:14:00.183 ************************************ 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:00.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70489 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70489 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70489 ']' 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.183 17:44:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:00.183 [2024-11-20 17:44:23.546175] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
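The xnvme_rpc test starting here exercises the same bdev through a live spdk_tgt instead of a JSON config: it creates the bdev over RPC, reads each parameter back out of framework_get_config, then deletes it. A manual equivalent, a sketch assuming a running target on the default /var/tmp/spdk.sock and the repo layout from this log:

    cd /home/vagrant/spdk_repo/spdk
    # -c maps to conserve_cpu=true, per the cc["true"]=-c table above
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    # read a parameter back, as the test's rpc_xnvme helper does
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev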
00:14:00.183 [2024-11-20 17:44:23.546294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70489 ] 00:14:00.183 [2024-11-20 17:44:23.707161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.441 [2024-11-20 17:44:23.825541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 xnvme_bdev 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:01.011 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.275 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:01.275 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:01.275 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:01.275 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:01.275 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.275 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:01.276 17:44:24 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70489 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70489 ']' 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70489 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70489 00:14:01.276 killing process with pid 70489 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70489' 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70489 00:14:01.276 17:44:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70489 00:14:02.647 ************************************ 00:14:02.647 END TEST xnvme_rpc 00:14:02.647 ************************************ 00:14:02.647 00:14:02.647 real 0m2.717s 00:14:02.647 user 0m2.790s 00:14:02.647 sys 0m0.407s 00:14:02.647 17:44:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.647 17:44:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.906 17:44:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:02.906 17:44:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:02.906 17:44:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.906 17:44:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:02.906 ************************************ 00:14:02.906 START TEST xnvme_bdevperf 00:14:02.906 ************************************ 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:02.906 17:44:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.906 { 00:14:02.906 "subsystems": [ 00:14:02.906 { 00:14:02.906 "subsystem": "bdev", 00:14:02.906 "config": [ 00:14:02.906 { 00:14:02.906 "params": { 00:14:02.906 "io_mechanism": "io_uring", 00:14:02.906 "conserve_cpu": true, 00:14:02.906 "filename": "/dev/nvme0n1", 00:14:02.906 "name": "xnvme_bdev" 00:14:02.906 }, 00:14:02.906 "method": "bdev_xnvme_create" 00:14:02.906 }, 00:14:02.906 { 00:14:02.906 "method": "bdev_wait_for_examine" 00:14:02.906 } 00:14:02.906 ] 00:14:02.906 } 00:14:02.906 ] 00:14:02.906 } 00:14:02.906 [2024-11-20 17:44:26.313505] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:14:02.906 [2024-11-20 17:44:26.313632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70552 ] 00:14:03.165 [2024-11-20 17:44:26.475672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.165 [2024-11-20 17:44:26.590922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.423 Running I/O for 5 seconds... 00:14:05.730 43132.00 IOPS, 168.48 MiB/s [2024-11-20T17:44:30.201Z] 45962.00 IOPS, 179.54 MiB/s [2024-11-20T17:44:31.134Z] 45098.33 IOPS, 176.17 MiB/s [2024-11-20T17:44:32.068Z] 45117.25 IOPS, 176.24 MiB/s [2024-11-20T17:44:32.068Z] 44623.80 IOPS, 174.31 MiB/s 00:14:08.528 Latency(us) 00:14:08.528 [2024-11-20T17:44:32.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.528 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:08.528 xnvme_bdev : 5.00 44595.09 174.20 0.00 0.00 1430.72 340.28 13611.32 00:14:08.528 [2024-11-20T17:44:32.068Z] =================================================================================================================== 00:14:08.528 [2024-11-20T17:44:32.068Z] Total : 44595.09 174.20 0.00 0.00 1430.72 340.28 13611.32 00:14:09.095 17:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.095 17:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:09.095 17:44:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:09.095 17:44:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:09.095 17:44:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:09.095 { 00:14:09.095 "subsystems": [ 00:14:09.095 { 00:14:09.095 "subsystem": "bdev", 00:14:09.095 "config": [ 00:14:09.095 { 00:14:09.095 "params": { 00:14:09.095 "io_mechanism": "io_uring", 00:14:09.095 "conserve_cpu": true, 00:14:09.095 "filename": "/dev/nvme0n1", 00:14:09.095 "name": "xnvme_bdev" 00:14:09.095 }, 00:14:09.095 "method": "bdev_xnvme_create" 00:14:09.095 }, 00:14:09.095 { 00:14:09.095 "method": "bdev_wait_for_examine" 00:14:09.095 } 00:14:09.095 ] 00:14:09.095 } 00:14:09.095 ] 00:14:09.095 } 00:14:09.354 [2024-11-20 17:44:32.640427] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
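This xnvme_bdevperf pass repeats the io_uring workloads with one knob flipped: conserve_cpu is now true, as the generated JSON above shows. Throughput aside, the clearest effect shows up in the test timing summaries: the conserve_cpu=false pass finished in real 0m12.596s with user 0m5.808s / sys 0m6.548s, while the pass below reports a nearly identical wall time with the split shifted from sys toward user. The change itself is a single assignment in the harness loop, sketched from the xnvme.sh@82-@84 xtrace lines above:

    # per iteration of the conserve-cpu loop in xnvme.sh:
    method_bdev_xnvme_create_0["conserve_cpu"]=true    # previously: false
    # gen_conf then renders the flag into the JSON handed to bdevperf:
    #   "conserve_cpu": true,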
00:14:09.354 [2024-11-20 17:44:32.640589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70633 ] 00:14:09.354 [2024-11-20 17:44:32.802302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.612 [2024-11-20 17:44:32.902029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.612 Running I/O for 5 seconds... 00:14:11.919 14067.00 IOPS, 54.95 MiB/s [2024-11-20T17:44:36.432Z] 14627.00 IOPS, 57.14 MiB/s [2024-11-20T17:44:37.370Z] 15401.00 IOPS, 60.16 MiB/s [2024-11-20T17:44:38.303Z] 14690.50 IOPS, 57.38 MiB/s [2024-11-20T17:44:38.303Z] 14177.40 IOPS, 55.38 MiB/s 00:14:14.763 Latency(us) 00:14:14.763 [2024-11-20T17:44:38.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.763 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:14.763 xnvme_bdev : 5.01 14168.28 55.34 0.00 0.00 4509.20 51.99 39523.25 00:14:14.763 [2024-11-20T17:44:38.303Z] =================================================================================================================== 00:14:14.763 [2024-11-20T17:44:38.303Z] Total : 14168.28 55.34 0.00 0.00 4509.20 51.99 39523.25 00:14:15.700 00:14:15.700 real 0m12.679s 00:14:15.700 user 0m9.299s 00:14:15.700 sys 0m2.385s 00:14:15.700 ************************************ 00:14:15.700 END TEST xnvme_bdevperf 00:14:15.700 ************************************ 00:14:15.700 17:44:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.700 17:44:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:15.700 17:44:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:15.700 17:44:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:15.700 17:44:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.700 17:44:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.700 ************************************ 00:14:15.700 START TEST xnvme_fio_plugin 00:14:15.700 ************************************ 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:15.700 17:44:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:15.700 17:44:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.700 17:44:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:15.700 17:44:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:15.700 17:44:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:15.700 17:44:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.700 { 00:14:15.700 "subsystems": [ 00:14:15.700 { 00:14:15.700 "subsystem": "bdev", 00:14:15.700 "config": [ 00:14:15.700 { 00:14:15.700 "params": { 00:14:15.700 "io_mechanism": "io_uring", 00:14:15.700 "conserve_cpu": true, 00:14:15.700 "filename": "/dev/nvme0n1", 00:14:15.700 "name": "xnvme_bdev" 00:14:15.700 }, 00:14:15.700 "method": "bdev_xnvme_create" 00:14:15.700 }, 00:14:15.700 { 00:14:15.700 "method": "bdev_wait_for_examine" 00:14:15.700 } 00:14:15.700 ] 00:14:15.700 } 00:14:15.700 ] 00:14:15.700 } 00:14:15.700 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:15.700 fio-3.35 00:14:15.700 Starting 1 thread 00:14:22.266 00:14:22.267 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70752: Wed Nov 20 17:44:44 2024 00:14:22.267 read: IOPS=48.4k, BW=189MiB/s (198MB/s)(947MiB/5008msec) 00:14:22.267 slat (nsec): min=2781, max=84664, avg=3099.23, stdev=1271.58 00:14:22.267 clat (usec): min=470, max=8906, avg=1199.93, stdev=271.47 00:14:22.267 lat (usec): min=473, max=8909, avg=1203.03, stdev=271.75 00:14:22.267 clat percentiles (usec): 00:14:22.267 | 1.00th=[ 799], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 996], 00:14:22.267 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1172], 60.00th=[ 1221], 00:14:22.267 | 70.00th=[ 1270], 80.00th=[ 1336], 90.00th=[ 1483], 95.00th=[ 1663], 00:14:22.267 | 99.00th=[ 2114], 99.50th=[ 2343], 99.90th=[ 3228], 99.95th=[ 3687], 00:14:22.267 | 99.99th=[ 5735] 00:14:22.267 bw ( KiB/s): min=141540, 
max=224768, per=100.00%, avg=193990.80, stdev=22223.01, samples=10 00:14:22.267 iops : min=35385, max=56192, avg=48497.70, stdev=5555.75, samples=10 00:14:22.267 lat (usec) : 500=0.01%, 750=0.17%, 1000=20.46% 00:14:22.267 lat (msec) : 2=77.94%, 4=1.39%, 10=0.03% 00:14:22.267 cpu : usr=73.84%, sys=22.97%, ctx=13, majf=0, minf=762 00:14:22.267 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.1%, 32=50.4%, >=64=1.6% 00:14:22.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.267 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:22.267 issued rwts: total=242533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.267 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:22.267 00:14:22.267 Run status group 0 (all jobs): 00:14:22.267 READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=947MiB (993MB), run=5008-5008msec 00:14:22.528 ----------------------------------------------------- 00:14:22.528 Suppressions used: 00:14:22.528 count bytes template 00:14:22.528 1 11 /usr/src/fio/parse.c 00:14:22.528 1 8 libtcmalloc_minimal.so 00:14:22.528 1 904 libcrypto.so 00:14:22.528 ----------------------------------------------------- 00:14:22.528 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:22.528 17:44:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:22.528 17:44:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:22.528 17:44:46 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:22.528 17:44:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:22.528 17:44:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:22.528 17:44:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:22.528 { 00:14:22.528 "subsystems": [ 00:14:22.528 { 00:14:22.528 "subsystem": "bdev", 00:14:22.528 "config": [ 00:14:22.528 { 00:14:22.528 "params": { 00:14:22.528 "io_mechanism": "io_uring", 00:14:22.528 "conserve_cpu": true, 00:14:22.528 "filename": "/dev/nvme0n1", 00:14:22.528 "name": "xnvme_bdev" 00:14:22.528 }, 00:14:22.528 "method": "bdev_xnvme_create" 00:14:22.528 }, 00:14:22.528 { 00:14:22.528 "method": "bdev_wait_for_examine" 00:14:22.528 } 00:14:22.528 ] 00:14:22.528 } 00:14:22.528 ] 00:14:22.528 } 00:14:22.787 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:22.787 fio-3.35 00:14:22.787 Starting 1 thread 00:14:29.354 00:14:29.354 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70838: Wed Nov 20 17:44:51 2024 00:14:29.354 write: IOPS=45.0k, BW=176MiB/s (184MB/s)(879MiB/5003msec); 0 zone resets 00:14:29.354 slat (nsec): min=2835, max=98297, avg=3620.18, stdev=1625.98 00:14:29.354 clat (usec): min=249, max=11447, avg=1283.30, stdev=309.92 00:14:29.354 lat (usec): min=253, max=11457, avg=1286.92, stdev=310.34 00:14:29.354 clat percentiles (usec): 00:14:29.354 | 1.00th=[ 840], 5.00th=[ 938], 10.00th=[ 1004], 20.00th=[ 1106], 00:14:29.354 | 30.00th=[ 1156], 40.00th=[ 1205], 50.00th=[ 1254], 60.00th=[ 1287], 00:14:29.354 | 70.00th=[ 1336], 80.00th=[ 1418], 90.00th=[ 1582], 95.00th=[ 1745], 00:14:29.354 | 99.00th=[ 2073], 99.50th=[ 2278], 99.90th=[ 5014], 99.95th=[ 5997], 00:14:29.354 | 99.99th=[ 8979] 00:14:29.354 bw ( KiB/s): min=150008, max=191488, per=99.92%, avg=179697.00, stdev=16209.17, samples=9 00:14:29.354 iops : min=37502, max=47872, avg=44924.22, stdev=4052.34, samples=9 00:14:29.354 lat (usec) : 250=0.01%, 500=0.03%, 750=0.10%, 1000=9.29% 00:14:29.354 lat (msec) : 2=89.13%, 4=1.31%, 10=0.14%, 20=0.01% 00:14:29.354 cpu : usr=73.07%, sys=23.91%, ctx=17, majf=0, minf=764 00:14:29.354 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:14:29.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.354 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:29.354 issued rwts: total=0,224926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:29.354 00:14:29.354 Run status group 0 (all jobs): 00:14:29.354 WRITE: bw=176MiB/s (184MB/s), 176MiB/s-176MiB/s (184MB/s-184MB/s), io=879MiB (921MB), run=5003-5003msec 00:14:29.354 ----------------------------------------------------- 00:14:29.354 Suppressions used: 00:14:29.354 count bytes template 00:14:29.354 1 11 /usr/src/fio/parse.c 00:14:29.354 1 8 libtcmalloc_minimal.so 00:14:29.354 1 904 libcrypto.so 00:14:29.354 ----------------------------------------------------- 00:14:29.354 00:14:29.354 00:14:29.354 real 0m13.829s 00:14:29.354 user 0m10.185s 00:14:29.354 sys 0m2.999s 
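Both fio_plugin passes follow the same mechanics traced above: the harness locates the sanitizer runtime via ldd | grep | awk, prepends it together with the SPDK fio plugin to LD_PRELOAD, and feeds fio the JSON config on /dev/fd/62. Stripped of the ASan preload, the invocation reduces to roughly the following sketch, reusing the /tmp/xnvme.json file from the earlier example:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=/tmp/xnvme.json --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev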
00:14:29.354 17:44:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.354 17:44:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:29.354 ************************************ 00:14:29.354 END TEST xnvme_fio_plugin 00:14:29.354 ************************************ 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:29.354 17:44:52 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:29.354 17:44:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.354 17:44:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.354 17:44:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.354 ************************************ 00:14:29.354 START TEST xnvme_rpc 00:14:29.354 ************************************ 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:29.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70924 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70924 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70924 ']' 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.354 17:44:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.612 [2024-11-20 17:44:52.962137] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
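From this point the outer loop switches io_mechanism to io_uring_cmd, which issues NVMe passthru commands against the generic character device /dev/ng0n1 rather than the block device /dev/nvme0n1, with conserve_cpu back to false. The xnvme_rpc test below verifies those parameters with the same jq probes; done by hand, again assuming a running target:

    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
    # prints /dev/ng0n1, matching the [[ /dev/ng0n1 == ... ]] check below
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev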
00:14:29.612 [2024-11-20 17:44:52.962402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70924 ] 00:14:29.612 [2024-11-20 17:44:53.120277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.872 [2024-11-20 17:44:53.215884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.443 xnvme_bdev 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.443 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:30.704 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:30.704 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:30.704 17:44:53 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.704 17:44:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:30.704 17:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70924 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70924 ']' 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70924 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70924 00:14:30.704 killing process with pid 70924 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70924' 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70924 00:14:30.704 17:44:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70924 00:14:32.125 00:14:32.125 real 0m2.686s 00:14:32.125 user 0m2.761s 00:14:32.125 sys 0m0.392s 00:14:32.125 17:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.125 ************************************ 00:14:32.125 END TEST xnvme_rpc 00:14:32.125 ************************************ 00:14:32.125 17:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.125 17:44:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:32.125 17:44:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.125 17:44:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.125 17:44:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.125 ************************************ 00:14:32.125 START TEST xnvme_bdevperf 00:14:32.125 ************************************ 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.125 17:44:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.418 { 00:14:32.418 "subsystems": [ 00:14:32.418 { 00:14:32.418 "subsystem": "bdev", 00:14:32.418 "config": [ 00:14:32.418 { 00:14:32.418 "params": { 00:14:32.418 "io_mechanism": "io_uring_cmd", 00:14:32.418 "conserve_cpu": false, 00:14:32.418 "filename": "/dev/ng0n1", 00:14:32.418 "name": "xnvme_bdev" 00:14:32.418 }, 00:14:32.418 "method": "bdev_xnvme_create" 00:14:32.418 }, 00:14:32.418 { 00:14:32.418 "method": "bdev_wait_for_examine" 00:14:32.418 } 00:14:32.418 ] 00:14:32.418 } 00:14:32.418 ] 00:14:32.418 } 00:14:32.418 [2024-11-20 17:44:55.695677] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:14:32.418 [2024-11-20 17:44:55.695811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70993 ] 00:14:32.418 [2024-11-20 17:44:55.857209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.676 [2024-11-20 17:44:55.956511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.677 Running I/O for 5 seconds... 00:14:34.998 41034.00 IOPS, 160.29 MiB/s [2024-11-20T17:44:59.481Z] 38682.00 IOPS, 151.10 MiB/s [2024-11-20T17:45:00.423Z] 36864.33 IOPS, 144.00 MiB/s [2024-11-20T17:45:01.356Z] 36564.00 IOPS, 142.83 MiB/s 00:14:37.816 Latency(us) 00:14:37.816 [2024-11-20T17:45:01.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.816 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:37.816 xnvme_bdev : 5.00 36869.58 144.02 0.00 0.00 1732.03 106.34 146800.64 00:14:37.816 [2024-11-20T17:45:01.356Z] =================================================================================================================== 00:14:37.816 [2024-11-20T17:45:01.356Z] Total : 36869.58 144.02 0.00 0.00 1732.03 106.34 146800.64 00:14:38.381 17:45:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.381 17:45:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:38.381 17:45:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.381 17:45:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.381 17:45:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.639 { 00:14:38.639 "subsystems": [ 00:14:38.639 { 00:14:38.639 "subsystem": "bdev", 00:14:38.639 "config": [ 00:14:38.639 { 00:14:38.639 "params": { 00:14:38.639 "io_mechanism": "io_uring_cmd", 00:14:38.639 "conserve_cpu": false, 00:14:38.639 "filename": "/dev/ng0n1", 00:14:38.639 "name": "xnvme_bdev" 00:14:38.639 }, 00:14:38.639 "method": "bdev_xnvme_create" 00:14:38.639 }, 00:14:38.639 { 00:14:38.639 "method": "bdev_wait_for_examine" 00:14:38.639 } 00:14:38.639 ] 00:14:38.639 } 00:14:38.639 ] 00:14:38.639 } 00:14:38.639 [2024-11-20 17:45:01.978096] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
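The io_uring_cmd bdevperf passes that follow run the usual randread/randwrite pair plus unmap and write_zeroes, all against the char device. One prerequisite worth flagging: /dev/ngXnY nodes are only present on kernels that expose the NVMe generic device interface, so a preflight check (a hypothetical convenience, not part of the harness) could be as simple as:

    # confirm the NVMe generic char device exists before using io_uring_cmd
    test -c /dev/ng0n1 || echo 'no /dev/ng0n1: NVMe generic interface unavailable'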
00:14:38.639 [2024-11-20 17:45:01.978201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:14:38.639 [2024-11-20 17:45:02.137331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.897 [2024-11-20 17:45:02.232546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.156 Running I/O for 5 seconds... 00:14:41.031 5195.00 IOPS, 20.29 MiB/s [2024-11-20T17:45:05.511Z] 4608.50 IOPS, 18.00 MiB/s [2024-11-20T17:45:06.900Z] 4355.00 IOPS, 17.01 MiB/s [2024-11-20T17:45:07.845Z] 4875.50 IOPS, 19.04 MiB/s [2024-11-20T17:45:07.845Z] 5960.00 IOPS, 23.28 MiB/s 00:14:44.305 Latency(us) 00:14:44.305 [2024-11-20T17:45:07.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.305 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:44.305 xnvme_bdev : 5.12 5836.91 22.80 0.00 0.00 10949.73 128.39 629145.60 00:14:44.305 [2024-11-20T17:45:07.845Z] =================================================================================================================== 00:14:44.305 [2024-11-20T17:45:07.845Z] Total : 5836.91 22.80 0.00 0.00 10949.73 128.39 629145.60 00:14:44.878 17:45:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.878 17:45:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:44.878 17:45:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:44.878 17:45:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:44.878 17:45:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.140 { 00:14:45.140 "subsystems": [ 00:14:45.140 { 00:14:45.140 "subsystem": "bdev", 00:14:45.140 "config": [ 00:14:45.140 { 00:14:45.140 "params": { 00:14:45.140 "io_mechanism": "io_uring_cmd", 00:14:45.140 "conserve_cpu": false, 00:14:45.140 "filename": "/dev/ng0n1", 00:14:45.140 "name": "xnvme_bdev" 00:14:45.140 }, 00:14:45.140 "method": "bdev_xnvme_create" 00:14:45.140 }, 00:14:45.140 { 00:14:45.140 "method": "bdev_wait_for_examine" 00:14:45.140 } 00:14:45.140 ] 00:14:45.140 } 00:14:45.140 ] 00:14:45.140 } 00:14:45.140 [2024-11-20 17:45:08.453721] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:14:45.140 [2024-11-20 17:45:08.454100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71141 ] 00:14:45.140 [2024-11-20 17:45:08.615598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.401 [2024-11-20 17:45:08.745274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.662 Running I/O for 5 seconds... 
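The -w unmap pass set up here reports bandwidth the same way as the read/write workloads: IOPS times the 4096-byte I/O size. The ~333 MiB/s figure in the summary below checks out from the reported ~85.3k IOPS:

    # 85307.65 IOPS * 4096 B / 2^20 B/MiB  =  ~333.2 MiB/s
    echo 'scale=1; 85307.65 * 4096 / 1048576' | bc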
00:14:47.552 76672.00 IOPS, 299.50 MiB/s [2024-11-20T17:45:12.480Z] 74336.00 IOPS, 290.38 MiB/s [2024-11-20T17:45:13.050Z] 80853.33 IOPS, 315.83 MiB/s [2024-11-20T17:45:14.431Z] 83056.00 IOPS, 324.44 MiB/s [2024-11-20T17:45:14.431Z] 85350.40 IOPS, 333.40 MiB/s 00:14:50.891 Latency(us) 00:14:50.891 [2024-11-20T17:45:14.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.891 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:50.891 xnvme_bdev : 5.00 85307.65 333.23 0.00 0.00 746.71 441.11 3881.75 00:14:50.891 [2024-11-20T17:45:14.431Z] =================================================================================================================== 00:14:50.891 [2024-11-20T17:45:14.431Z] Total : 85307.65 333.23 0.00 0.00 746.71 441.11 3881.75 00:14:51.460 17:45:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.460 17:45:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:51.460 17:45:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:51.460 17:45:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:51.460 17:45:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 { 00:14:51.460 "subsystems": [ 00:14:51.460 { 00:14:51.460 "subsystem": "bdev", 00:14:51.460 "config": [ 00:14:51.460 { 00:14:51.460 "params": { 00:14:51.460 "io_mechanism": "io_uring_cmd", 00:14:51.460 "conserve_cpu": false, 00:14:51.460 "filename": "/dev/ng0n1", 00:14:51.460 "name": "xnvme_bdev" 00:14:51.460 }, 00:14:51.460 "method": "bdev_xnvme_create" 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "method": "bdev_wait_for_examine" 00:14:51.460 } 00:14:51.460 ] 00:14:51.460 } 00:14:51.460 ] 00:14:51.460 } 00:14:51.460 [2024-11-20 17:45:14.814691] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:14:51.460 [2024-11-20 17:45:14.814806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71220 ] 00:14:51.460 [2024-11-20 17:45:14.974494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.720 [2024-11-20 17:45:15.073268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.978 Running I/O for 5 seconds... 
00:14:53.852 1549.00 IOPS, 6.05 MiB/s [2024-11-20T17:45:18.326Z] 1242.00 IOPS, 4.85 MiB/s [2024-11-20T17:45:19.346Z] 964.67 IOPS, 3.77 MiB/s [2024-11-20T17:45:20.720Z] 798.75 IOPS, 3.12 MiB/s [2024-11-20T17:45:20.720Z] 716.80 IOPS, 2.80 MiB/s 00:14:57.180 Latency(us) 00:14:57.180 [2024-11-20T17:45:20.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.180 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:57.180 xnvme_bdev : 5.14 709.27 2.77 0.00 0.00 89348.69 47.06 429109.56 00:14:57.180 [2024-11-20T17:45:20.720Z] =================================================================================================================== 00:14:57.180 [2024-11-20T17:45:20.720Z] Total : 709.27 2.77 0.00 0.00 89348.69 47.06 429109.56 00:14:57.746 ************************************ 00:14:57.746 END TEST xnvme_bdevperf 00:14:57.746 ************************************ 00:14:57.746 00:14:57.746 real 0m25.400s 00:14:57.746 user 0m14.747s 00:14:57.746 sys 0m10.219s 00:14:57.746 17:45:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.746 17:45:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.746 17:45:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:57.746 17:45:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:57.746 17:45:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.746 17:45:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:57.746 ************************************ 00:14:57.746 START TEST xnvme_fio_plugin 00:14:57.746 ************************************ 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 
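The fio_plugin traces that continue below resolve the sanitizer library via ldd | grep | awk, assemble LD_PRELOAD from it plus the SPDK bdev engine, and then launch fio. The net invocation reduces to this sketch, with all flags taken from this log; --spdk_json_conf points at a regular file holding the bdev JSON shown above where the harness uses /dev/fd/62, and libasan.so.8 can be dropped from LD_PRELOAD on a non-ASan build:

  # sketch: run fio against the xnvme bdev via the SPDK ioengine plugin
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev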
00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:57.746 17:45:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:57.746 { 00:14:57.746 "subsystems": [ 00:14:57.746 { 00:14:57.746 "subsystem": "bdev", 00:14:57.746 "config": [ 00:14:57.746 { 00:14:57.746 "params": { 00:14:57.746 "io_mechanism": "io_uring_cmd", 00:14:57.746 "conserve_cpu": false, 00:14:57.746 "filename": "/dev/ng0n1", 00:14:57.746 "name": "xnvme_bdev" 00:14:57.746 }, 00:14:57.746 "method": "bdev_xnvme_create" 00:14:57.746 }, 00:14:57.746 { 00:14:57.746 "method": "bdev_wait_for_examine" 00:14:57.746 } 00:14:57.746 ] 00:14:57.746 } 00:14:57.746 ] 00:14:57.746 } 00:14:58.004 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:58.004 fio-3.35 00:14:58.004 Starting 1 thread 00:15:04.562 00:15:04.562 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71334: Wed Nov 20 17:45:26 2024 00:15:04.562 read: IOPS=56.0k, BW=219MiB/s (230MB/s)(1095MiB/5001msec) 00:15:04.562 slat (usec): min=2, max=107, avg= 3.52, stdev= 1.44 00:15:04.562 clat (usec): min=332, max=12575, avg=1006.57, stdev=285.82 00:15:04.562 lat (usec): min=334, max=12578, avg=1010.08, stdev=286.06 00:15:04.562 clat percentiles (usec): 00:15:04.562 | 1.00th=[ 668], 5.00th=[ 709], 10.00th=[ 734], 20.00th=[ 783], 00:15:04.562 | 30.00th=[ 832], 40.00th=[ 873], 50.00th=[ 922], 60.00th=[ 1004], 00:15:04.562 | 70.00th=[ 1106], 80.00th=[ 1221], 90.00th=[ 1369], 95.00th=[ 1500], 00:15:04.562 | 99.00th=[ 1860], 99.50th=[ 2008], 99.90th=[ 2802], 99.95th=[ 3130], 00:15:04.562 | 99.99th=[ 5800] 00:15:04.562 bw ( KiB/s): min=189424, max=269824, per=100.00%, avg=228849.22, stdev=29960.19, samples=9 00:15:04.562 iops : min=47356, max=67456, avg=57212.22, stdev=7489.97, samples=9 00:15:04.562 lat (usec) : 500=0.02%, 750=12.87%, 1000=46.85% 00:15:04.562 lat (msec) : 2=39.74%, 4=0.50%, 10=0.02%, 20=0.01% 00:15:04.562 cpu : usr=39.70%, sys=59.42%, ctx=33, majf=0, minf=762 00:15:04.562 IO depths : 1=1.4%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.5%, >=64=1.6% 00:15:04.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.562 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:15:04.562 issued rwts: total=280272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:04.562 00:15:04.562 Run status group 0 (all jobs): 00:15:04.562 READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=1095MiB (1148MB), run=5001-5001msec 00:15:04.562 ----------------------------------------------------- 00:15:04.562 Suppressions used: 00:15:04.562 count bytes template 00:15:04.562 1 11 /usr/src/fio/parse.c 00:15:04.562 1 8 libtcmalloc_minimal.so 00:15:04.562 1 904 libcrypto.so 00:15:04.562 ----------------------------------------------------- 00:15:04.562 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:04.562 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:04.563 17:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.563 { 00:15:04.563 "subsystems": [ 00:15:04.563 { 00:15:04.563 "subsystem": "bdev", 00:15:04.563 "config": [ 00:15:04.563 { 00:15:04.563 "params": { 00:15:04.563 "io_mechanism": "io_uring_cmd", 00:15:04.563 "conserve_cpu": false, 00:15:04.563 "filename": "/dev/ng0n1", 00:15:04.563 "name": "xnvme_bdev" 00:15:04.563 }, 00:15:04.563 "method": "bdev_xnvme_create" 00:15:04.563 }, 00:15:04.563 { 00:15:04.563 "method": "bdev_wait_for_examine" 00:15:04.563 } 00:15:04.563 ] 00:15:04.563 } 00:15:04.563 ] 00:15:04.563 } 00:15:04.563 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:04.563 fio-3.35 00:15:04.563 Starting 1 thread 00:15:11.127 00:15:11.127 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71419: Wed Nov 20 17:45:33 2024 00:15:11.127 write: IOPS=35.5k, BW=139MiB/s (146MB/s)(717MiB/5161msec); 0 zone resets 00:15:11.127 slat (nsec): min=2208, max=74510, avg=3755.04, stdev=1799.04 00:15:11.127 clat (usec): min=60, max=175485, avg=1666.82, stdev=4983.58 00:15:11.127 lat (usec): min=64, max=175489, avg=1670.57, stdev=4983.60 00:15:11.127 clat percentiles (usec): 00:15:11.127 | 1.00th=[ 510], 5.00th=[ 783], 10.00th=[ 922], 20.00th=[ 1106], 00:15:11.127 | 30.00th=[ 1221], 40.00th=[ 1303], 50.00th=[ 1369], 60.00th=[ 1450], 00:15:11.127 | 70.00th=[ 1516], 80.00th=[ 1631], 90.00th=[ 1860], 95.00th=[ 2278], 00:15:11.127 | 99.00th=[ 5800], 99.50th=[ 8455], 99.90th=[ 98042], 99.95th=[166724], 00:15:11.127 | 99.99th=[173016] 00:15:11.127 bw ( KiB/s): min=60544, max=171080, per=100.00%, avg=146722.40, stdev=32908.00, samples=10 00:15:11.127 iops : min=15136, max=42770, avg=36680.60, stdev=8227.00, samples=10 00:15:11.127 lat (usec) : 100=0.01%, 250=0.08%, 500=0.84%, 750=3.22%, 1000=9.18% 00:15:11.127 lat (msec) : 2=79.15%, 4=5.47%, 10=1.66%, 20=0.13%, 50=0.15% 00:15:11.127 lat (msec) : 100=0.01%, 250=0.09% 00:15:11.127 cpu : usr=37.00%, sys=62.03%, ctx=8, majf=0, minf=764 00:15:11.127 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=10.4%, 16=23.0%, 32=56.0%, >=64=2.2% 00:15:11.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.127 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:15:11.127 issued rwts: total=0,183467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:11.127 00:15:11.127 Run status group 0 (all jobs): 00:15:11.127 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=717MiB (751MB), run=5161-5161msec 00:15:11.127 ----------------------------------------------------- 00:15:11.127 Suppressions used: 00:15:11.127 count bytes template 00:15:11.127 1 11 /usr/src/fio/parse.c 00:15:11.127 1 8 libtcmalloc_minimal.so 00:15:11.127 1 904 libcrypto.so 00:15:11.127 ----------------------------------------------------- 00:15:11.127 00:15:11.128 00:15:11.128 real 0m13.536s 00:15:11.128 user 0m6.452s 00:15:11.128 sys 0m6.671s 00:15:11.128 17:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.128 17:45:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:11.128 ************************************ 00:15:11.128 END TEST xnvme_fio_plugin 00:15:11.128 ************************************ 00:15:11.388 17:45:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:11.388 17:45:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:11.388 17:45:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:11.388 17:45:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:11.388 17:45:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:11.388 17:45:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.389 17:45:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.389 ************************************ 00:15:11.389 START TEST xnvme_rpc 00:15:11.389 ************************************ 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71503 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71503 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71503 ']' 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.389 17:45:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.389 [2024-11-20 17:45:34.786818] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:15:11.389 [2024-11-20 17:45:34.787014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71503 ] 00:15:11.650 [2024-11-20 17:45:34.949309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.650 [2024-11-20 17:45:35.075012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.594 xnvme_bdev 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.594 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:12.595 
17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.595 17:45:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.595 [2024-11-20 17:45:35.923713] bdev.c:5263:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71503 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71503 ']' 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71503 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71503 00:15:13.981 killing process with pid 71503 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71503' 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71503 00:15:13.981 17:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71503 00:15:15.367 00:15:15.367 real 0m4.037s 00:15:15.367 user 0m4.040s 00:15:15.367 sys 0m0.473s 00:15:15.367 17:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.367 ************************************ 00:15:15.367 END TEST xnvme_rpc 00:15:15.367 ************************************ 00:15:15.367 17:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.367 17:45:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:15.367 17:45:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.367 17:45:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.367 17:45:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.367 ************************************ 00:15:15.367 START TEST xnvme_bdevperf 00:15:15.367 ************************************ 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 
-q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.367 17:45:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.367 { 00:15:15.367 "subsystems": [ 00:15:15.367 { 00:15:15.367 "subsystem": "bdev", 00:15:15.367 "config": [ 00:15:15.367 { 00:15:15.367 "params": { 00:15:15.367 "io_mechanism": "io_uring_cmd", 00:15:15.367 "conserve_cpu": true, 00:15:15.367 "filename": "/dev/ng0n1", 00:15:15.367 "name": "xnvme_bdev" 00:15:15.367 }, 00:15:15.367 "method": "bdev_xnvme_create" 00:15:15.367 }, 00:15:15.367 { 00:15:15.367 "method": "bdev_wait_for_examine" 00:15:15.367 } 00:15:15.367 ] 00:15:15.367 } 00:15:15.367 ] 00:15:15.367 } 00:15:15.367 [2024-11-20 17:45:38.834982] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:15:15.367 [2024-11-20 17:45:38.835093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71584 ] 00:15:15.628 [2024-11-20 17:45:38.989223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.628 [2024-11-20 17:45:39.065198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.899 Running I/O for 5 seconds... 00:15:17.777 57599.00 IOPS, 225.00 MiB/s [2024-11-20T17:45:42.689Z] 60228.00 IOPS, 235.27 MiB/s [2024-11-20T17:45:43.625Z] 60976.67 IOPS, 238.19 MiB/s [2024-11-20T17:45:44.565Z] 59325.00 IOPS, 231.74 MiB/s 00:15:21.025 Latency(us) 00:15:21.025 [2024-11-20T17:45:44.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.025 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:21.025 xnvme_bdev : 5.00 56196.17 219.52 0.00 0.00 1134.96 370.22 13308.85 00:15:21.025 [2024-11-20T17:45:44.565Z] =================================================================================================================== 00:15:21.025 [2024-11-20T17:45:44.565Z] Total : 56196.17 219.52 0.00 0.00 1134.96 370.22 13308.85 00:15:21.599 17:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.599 17:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:21.599 17:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:21.599 17:45:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.599 17:45:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.599 { 00:15:21.599 "subsystems": [ 00:15:21.599 { 00:15:21.599 "subsystem": "bdev", 00:15:21.599 "config": [ 00:15:21.599 { 00:15:21.599 "params": { 00:15:21.599 "io_mechanism": "io_uring_cmd", 00:15:21.599 "conserve_cpu": true, 00:15:21.599 "filename": "/dev/ng0n1", 00:15:21.599 "name": "xnvme_bdev" 00:15:21.599 }, 00:15:21.599 "method": "bdev_xnvme_create" 00:15:21.599 }, 00:15:21.599 { 00:15:21.599 "method": "bdev_wait_for_examine" 00:15:21.599 } 00:15:21.599 ] 00:15:21.599 } 00:15:21.599 ] 00:15:21.599 } 00:15:21.599 [2024-11-20 17:45:45.113217] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:15:21.599 [2024-11-20 17:45:45.113539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71658 ] 00:15:21.860 [2024-11-20 17:45:45.276619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.121 [2024-11-20 17:45:45.401890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.418 Running I/O for 5 seconds... 00:15:24.313 21592.00 IOPS, 84.34 MiB/s [2024-11-20T17:45:48.787Z] 15966.50 IOPS, 62.37 MiB/s [2024-11-20T17:45:49.731Z] 15730.67 IOPS, 61.45 MiB/s [2024-11-20T17:45:51.113Z] 16227.50 IOPS, 63.39 MiB/s [2024-11-20T17:45:51.113Z] 15259.00 IOPS, 59.61 MiB/s 00:15:27.573 Latency(us) 00:15:27.573 [2024-11-20T17:45:51.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.573 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:27.573 xnvme_bdev : 5.06 15094.31 58.96 0.00 0.00 4233.07 48.64 432335.95 00:15:27.573 [2024-11-20T17:45:51.113Z] =================================================================================================================== 00:15:27.573 [2024-11-20T17:45:51.113Z] Total : 15094.31 58.96 0.00 0.00 4233.07 48.64 432335.95 00:15:28.146 17:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.146 17:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:28.146 17:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:28.146 17:45:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:28.146 17:45:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:28.146 { 00:15:28.146 "subsystems": [ 00:15:28.146 { 00:15:28.146 "subsystem": "bdev", 00:15:28.146 "config": [ 00:15:28.146 { 00:15:28.146 "params": { 00:15:28.146 "io_mechanism": "io_uring_cmd", 00:15:28.146 "conserve_cpu": true, 00:15:28.146 "filename": "/dev/ng0n1", 00:15:28.146 "name": "xnvme_bdev" 00:15:28.146 }, 00:15:28.146 "method": "bdev_xnvme_create" 00:15:28.146 }, 00:15:28.146 { 00:15:28.146 "method": "bdev_wait_for_examine" 00:15:28.146 } 00:15:28.146 ] 00:15:28.146 } 00:15:28.146 ] 00:15:28.146 } 00:15:28.146 [2024-11-20 17:45:51.634290] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:15:28.146 [2024-11-20 17:45:51.634453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71739 ] 00:15:28.408 [2024-11-20 17:45:51.799069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.408 [2024-11-20 17:45:51.921931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.673 Running I/O for 5 seconds... 
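These conserve_cpu runs differ from the earlier suite only in the generated JSON carrying "conserve_cpu": true. The xnvme_rpc test above creates the same bdev at runtime via rpc_cmd, which wraps the in-tree rpc.py; a sketch of that equivalent call, assuming a running spdk_tgt listening on the default socket:

  # sketch: create the same bdev over RPC; -c corresponds to "conserve_cpu": true
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c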
00:15:30.992 98880.00 IOPS, 386.25 MiB/s [2024-11-20T17:45:55.464Z] 99648.00 IOPS, 389.25 MiB/s [2024-11-20T17:45:56.398Z] 100949.33 IOPS, 394.33 MiB/s [2024-11-20T17:45:57.332Z] 99216.00 IOPS, 387.56 MiB/s 00:15:33.792 Latency(us) 00:15:33.792 [2024-11-20T17:45:57.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.792 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:33.792 xnvme_bdev : 5.00 97007.25 378.93 0.00 0.00 656.38 346.58 2873.50 00:15:33.792 [2024-11-20T17:45:57.332Z] =================================================================================================================== 00:15:33.792 [2024-11-20T17:45:57.332Z] Total : 97007.25 378.93 0.00 0.00 656.38 346.58 2873.50 00:15:34.360 17:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.360 17:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:34.360 17:45:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:34.360 17:45:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:34.360 17:45:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.360 { 00:15:34.360 "subsystems": [ 00:15:34.360 { 00:15:34.360 "subsystem": "bdev", 00:15:34.360 "config": [ 00:15:34.360 { 00:15:34.360 "params": { 00:15:34.360 "io_mechanism": "io_uring_cmd", 00:15:34.360 "conserve_cpu": true, 00:15:34.360 "filename": "/dev/ng0n1", 00:15:34.360 "name": "xnvme_bdev" 00:15:34.360 }, 00:15:34.360 "method": "bdev_xnvme_create" 00:15:34.360 }, 00:15:34.360 { 00:15:34.360 "method": "bdev_wait_for_examine" 00:15:34.360 } 00:15:34.360 ] 00:15:34.360 } 00:15:34.360 ] 00:15:34.360 } 00:15:34.360 [2024-11-20 17:45:57.845182] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:15:34.360 [2024-11-20 17:45:57.845298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71808 ] 00:15:34.618 [2024-11-20 17:45:58.001416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.618 [2024-11-20 17:45:58.080991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.877 Running I/O for 5 seconds... 
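The MiB/s column in these bdevperf tables is just IOPS scaled by the 4096-byte IO size; checking the unmap total above as a worked example:

  # sanity check: MiB/s = IOPS * io_size_bytes / 2^20
  echo '97007.25 * 4096 / 1048576' | bc -l   # -> 378.934..., matching 378.93 MiB/s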
00:15:36.743 10305.00 IOPS, 40.25 MiB/s [2024-11-20T17:46:01.661Z] 5586.00 IOPS, 21.82 MiB/s [2024-11-20T17:46:02.597Z] 6734.00 IOPS, 26.30 MiB/s [2024-11-20T17:46:03.732Z] 6877.75 IOPS, 26.87 MiB/s [2024-11-20T17:46:03.990Z] 6970.60 IOPS, 27.23 MiB/s 00:15:40.450 Latency(us) 00:15:40.450 [2024-11-20T17:46:03.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.450 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:40.450 xnvme_bdev : 5.45 6412.28 25.05 0.00 0.00 9969.05 69.32 696899.74 00:15:40.450 [2024-11-20T17:46:03.990Z] =================================================================================================================== 00:15:40.450 [2024-11-20T17:46:03.990Z] Total : 6412.28 25.05 0.00 0.00 9969.05 69.32 696899.74 00:15:41.017 00:15:41.017 real 0m25.652s 00:15:41.017 user 0m19.340s 00:15:41.017 sys 0m5.290s 00:15:41.017 ************************************ 00:15:41.017 END TEST xnvme_bdevperf 00:15:41.017 ************************************ 00:15:41.017 17:46:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.017 17:46:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 17:46:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:41.017 17:46:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.017 17:46:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.017 17:46:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 ************************************ 00:15:41.017 START TEST xnvme_fio_plugin 00:15:41.017 ************************************ 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.017 
17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.017 17:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.017 { 00:15:41.017 "subsystems": [ 00:15:41.017 { 00:15:41.017 "subsystem": "bdev", 00:15:41.017 "config": [ 00:15:41.017 { 00:15:41.017 "params": { 00:15:41.017 "io_mechanism": "io_uring_cmd", 00:15:41.017 "conserve_cpu": true, 00:15:41.017 "filename": "/dev/ng0n1", 00:15:41.017 "name": "xnvme_bdev" 00:15:41.017 }, 00:15:41.017 "method": "bdev_xnvme_create" 00:15:41.017 }, 00:15:41.017 { 00:15:41.017 "method": "bdev_wait_for_examine" 00:15:41.017 } 00:15:41.017 ] 00:15:41.017 } 00:15:41.017 ] 00:15:41.017 } 00:15:41.278 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:41.278 fio-3.35 00:15:41.278 Starting 1 thread 00:15:47.865 00:15:47.865 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71927: Wed Nov 20 17:46:10 2024 00:15:47.865 read: IOPS=39.7k, BW=155MiB/s (163MB/s)(775MiB/5001msec) 00:15:47.865 slat (nsec): min=2782, max=67031, avg=3591.90, stdev=1836.30 00:15:47.865 clat (usec): min=680, max=3543, avg=1467.30, stdev=272.76 00:15:47.865 lat (usec): min=684, max=3575, avg=1470.90, stdev=273.26 00:15:47.865 clat percentiles (usec): 00:15:47.865 | 1.00th=[ 873], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[ 1221], 00:15:47.865 | 30.00th=[ 1319], 40.00th=[ 1401], 50.00th=[ 1467], 60.00th=[ 1532], 00:15:47.865 | 70.00th=[ 1598], 80.00th=[ 1680], 90.00th=[ 1811], 95.00th=[ 1942], 00:15:47.865 | 99.00th=[ 2212], 99.50th=[ 2278], 99.90th=[ 2507], 99.95th=[ 2802], 00:15:47.865 | 99.99th=[ 3359] 00:15:47.865 bw ( KiB/s): min=145920, max=199160, per=100.00%, avg=161470.89, stdev=19902.94, samples=9 00:15:47.865 iops : min=36480, max=49790, avg=40367.67, stdev=4975.74, samples=9 00:15:47.865 lat (usec) : 750=0.07%, 1000=3.08% 00:15:47.865 lat (msec) : 2=93.36%, 4=3.49% 00:15:47.865 cpu : usr=53.20%, sys=43.66%, ctx=16, majf=0, minf=762 00:15:47.865 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:47.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.865 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:47.865 issued rwts: total=198527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.865 00:15:47.865 Run status group 0 (all jobs): 00:15:47.865 READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=775MiB (813MB), run=5001-5001msec 00:15:48.125 ----------------------------------------------------- 00:15:48.125 Suppressions used: 00:15:48.125 count bytes template 00:15:48.125 1 11 /usr/src/fio/parse.c 00:15:48.125 1 8 libtcmalloc_minimal.so 00:15:48.125 1 904 libcrypto.so 00:15:48.125 ----------------------------------------------------- 00:15:48.125 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:48.125 17:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:48.125 { 00:15:48.125 "subsystems": [ 00:15:48.125 { 00:15:48.125 "subsystem": "bdev", 00:15:48.125 "config": [ 00:15:48.125 { 00:15:48.125 "params": { 00:15:48.125 "io_mechanism": "io_uring_cmd", 00:15:48.125 "conserve_cpu": true, 00:15:48.125 "filename": "/dev/ng0n1", 00:15:48.125 "name": "xnvme_bdev" 00:15:48.125 }, 00:15:48.125 "method": "bdev_xnvme_create" 00:15:48.125 }, 00:15:48.125 { 00:15:48.125 "method": "bdev_wait_for_examine" 00:15:48.125 } 00:15:48.125 ] 00:15:48.125 } 00:15:48.125 ] 00:15:48.125 } 00:15:48.386 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:48.386 fio-3.35 00:15:48.386 Starting 1 thread 00:15:54.945 00:15:54.945 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72018: Wed Nov 20 17:46:17 2024 00:15:54.945 write: IOPS=19.4k, BW=75.9MiB/s (79.6MB/s)(380MiB/5009msec); 0 zone resets 00:15:54.945 slat (usec): min=2, max=542, avg= 4.09, stdev= 5.32 00:15:54.945 clat (usec): min=51, max=20613, avg=3232.54, stdev=3403.51 00:15:54.945 lat (usec): min=54, max=20617, avg=3236.63, stdev=3403.45 00:15:54.945 clat percentiles (usec): 00:15:54.945 | 1.00th=[ 178], 5.00th=[ 359], 10.00th=[ 490], 20.00th=[ 644], 00:15:54.945 | 30.00th=[ 742], 40.00th=[ 881], 50.00th=[ 1205], 60.00th=[ 2868], 00:15:54.945 | 70.00th=[ 4686], 80.00th=[ 6259], 90.00th=[ 8225], 95.00th=[10159], 00:15:54.945 | 99.00th=[13435], 99.50th=[14484], 99.90th=[17695], 99.95th=[19268], 00:15:54.945 | 99.99th=[20055] 00:15:54.945 bw ( KiB/s): min=60120, max=91000, per=100.00%, avg=77823.20, stdev=9917.67, samples=10 00:15:54.945 iops : min=15030, max=22750, avg=19455.80, stdev=2479.42, samples=10 00:15:54.945 lat (usec) : 100=0.20%, 250=1.36%, 500=8.83%, 750=20.50%, 1000=13.50% 00:15:54.945 lat (msec) : 2=12.65%, 4=8.77%, 10=28.93%, 20=5.24%, 50=0.02% 00:15:54.945 cpu : usr=80.02%, sys=11.26%, ctx=10, majf=0, minf=764 00:15:54.945 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=1.0%, 16=5.0%, 32=84.7%, >=64=9.0% 00:15:54.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.945 complete : 0=0.0%, 4=94.9%, 8=1.4%, 16=1.7%, 32=1.4%, 64=0.5%, >=64=0.0% 00:15:54.945 issued rwts: total=0,97340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:54.945 00:15:54.945 Run status group 0 (all jobs): 00:15:54.945 WRITE: bw=75.9MiB/s (79.6MB/s), 75.9MiB/s-75.9MiB/s (79.6MB/s-79.6MB/s), io=380MiB (399MB), run=5009-5009msec 00:15:54.945 ----------------------------------------------------- 00:15:54.945 Suppressions used: 00:15:54.945 count bytes template 00:15:54.945 1 11 /usr/src/fio/parse.c 00:15:54.945 1 8 libtcmalloc_minimal.so 00:15:54.945 1 904 libcrypto.so 00:15:54.945 ----------------------------------------------------- 00:15:54.945 00:15:54.945 00:15:54.945 real 0m13.873s 00:15:54.945 user 0m9.666s 00:15:54.945 sys 0m3.276s 00:15:54.945 17:46:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.945 ************************************ 00:15:54.945 END TEST xnvme_fio_plugin 00:15:54.945 ************************************ 00:15:54.945 17:46:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:54.945 Process with pid 71503 is not found 00:15:54.945 17:46:18 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71503 00:15:54.945 17:46:18 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71503 ']' 00:15:54.945 17:46:18 
nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71503 00:15:54.945 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71503) - No such process 00:15:54.945 17:46:18 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71503 is not found' 00:15:54.945 00:15:54.945 real 3m28.812s 00:15:54.945 user 2m4.204s 00:15:54.945 sys 1m11.096s 00:15:54.945 ************************************ 00:15:54.945 END TEST nvme_xnvme 00:15:54.945 ************************************ 00:15:54.945 17:46:18 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.945 17:46:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.946 17:46:18 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:54.946 17:46:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.946 17:46:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.946 17:46:18 -- common/autotest_common.sh@10 -- # set +x 00:15:55.206 ************************************ 00:15:55.206 START TEST blockdev_xnvme 00:15:55.206 ************************************ 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:55.206 * Looking for test storage... 00:15:55.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.206 17:46:18 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:55.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.206 --rc genhtml_branch_coverage=1 00:15:55.206 --rc genhtml_function_coverage=1 00:15:55.206 --rc genhtml_legend=1 00:15:55.206 --rc geninfo_all_blocks=1 00:15:55.206 --rc geninfo_unexecuted_blocks=1 00:15:55.206 00:15:55.206 ' 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:55.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.206 --rc genhtml_branch_coverage=1 00:15:55.206 --rc genhtml_function_coverage=1 00:15:55.206 --rc genhtml_legend=1 00:15:55.206 --rc geninfo_all_blocks=1 00:15:55.206 --rc geninfo_unexecuted_blocks=1 00:15:55.206 00:15:55.206 ' 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:55.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.206 --rc genhtml_branch_coverage=1 00:15:55.206 --rc genhtml_function_coverage=1 00:15:55.206 --rc genhtml_legend=1 00:15:55.206 --rc geninfo_all_blocks=1 00:15:55.206 --rc geninfo_unexecuted_blocks=1 00:15:55.206 00:15:55.206 ' 00:15:55.206 17:46:18 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:55.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.207 --rc genhtml_branch_coverage=1 00:15:55.207 --rc genhtml_function_coverage=1 00:15:55.207 --rc genhtml_legend=1 00:15:55.207 --rc geninfo_all_blocks=1 00:15:55.207 --rc geninfo_unexecuted_blocks=1 00:15:55.207 00:15:55.207 ' 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72152 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:55.207 17:46:18 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72152 00:15:55.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.207 17:46:18 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72152 ']' 00:15:55.207 17:46:18 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.207 17:46:18 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.207 17:46:18 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.207 17:46:18 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.207 17:46:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.207 [2024-11-20 17:46:18.743991] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:15:55.467 [2024-11-20 17:46:18.744389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72152 ] 00:15:55.467 [2024-11-20 17:46:18.907916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.730 [2024-11-20 17:46:19.040610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.301 17:46:19 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.301 17:46:19 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:56.301 17:46:19 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:56.301 17:46:19 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:56.301 17:46:19 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:56.301 17:46:19 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:56.301 17:46:19 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:56.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:57.442 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:57.442 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:57.442 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:57.442 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:57.442 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
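The is_block_zoned checks traced above (and continuing below for the remaining namespaces) reduce to reading each device's queue/zoned sysfs attribute and comparing it against "none". A standalone sketch of that check, under the assumption that this is all the helper does:

  # sketch: what get_zoned_devs checks per device, per the traces in this log
  for dev in /sys/block/nvme*; do
    [[ "$(cat "$dev/queue/zoned" 2>/dev/null)" != none ]] \
        && echo "${dev##*/} is zoned"
  done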
00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:57.442 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:57.443 nvme0n1 00:15:57.443 nvme0n2 00:15:57.443 nvme0n3 00:15:57.443 nvme1n1 00:15:57.443 nvme2n1 00:15:57.443 nvme3n1 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.443 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.443 17:46:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.705 17:46:20 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.705 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:57.705 17:46:20 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.705 17:46:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.705 17:46:20 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.705 17:46:20 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:57.705 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:57.705 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:57.705 17:46:21 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.705 17:46:21 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.705 17:46:21 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.705 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:57.705 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:57.706 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "477e9821-d6dc-42fe-9a16-ca3090394aab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "477e9821-d6dc-42fe-9a16-ca3090394aab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dde1146a-a116-4d26-851e-b8128b3e84d9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dde1146a-a116-4d26-851e-b8128b3e84d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "3cf6d9ec-fdff-405d-9912-23905874775e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3cf6d9ec-fdff-405d-9912-23905874775e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "214f87d3-8c0f-4a6c-99b8-36fff8439e70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "214f87d3-8c0f-4a6c-99b8-36fff8439e70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "748d9d8e-95ee-47cc-a3d9-1f62c31331dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "748d9d8e-95ee-47cc-a3d9-1f62c31331dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "766bbd84-6433-4103-8662-1ebdae6b7bb8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "766bbd84-6433-4103-8662-1ebdae6b7bb8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:57.706 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:57.706 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:57.706 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:57.706 17:46:21 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 72152 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72152 ']' 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72152 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72152 00:15:57.706 killing process with pid 72152 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72152' 00:15:57.706 17:46:21 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72152 00:15:57.706 
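The six bdev_xnvme_create lines printed above are replayed through rpc_cmd, and the resulting bdevs are read back with bdev_get_bdevs filtered through jq (bdev/blockdev.sh@747-748). The same two steps can be issued by hand against a running target; a sketch assuming the repo's scripts/rpc.py and the default RPC socket:

    # Attach one kernel namespace as an xNVMe bdev over io_uring;
    # -c enables xNVMe's conserve-CPU mode, as in the printed commands.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c

    # List the names of unclaimed bdevs, the same filter the harness uses.
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'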
17:46:21 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72152 00:15:59.620 17:46:22 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:59.620 17:46:22 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:59.620 17:46:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:59.620 17:46:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.620 17:46:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.620 ************************************ 00:15:59.620 START TEST bdev_hello_world 00:15:59.620 ************************************ 00:15:59.620 17:46:22 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:59.620 [2024-11-20 17:46:22.854141] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:15:59.620 [2024-11-20 17:46:22.854285] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72436 ] 00:15:59.620 [2024-11-20 17:46:23.018705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.620 [2024-11-20 17:46:23.144008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.192 [2024-11-20 17:46:23.559998] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:00.192 [2024-11-20 17:46:23.560060] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:00.192 [2024-11-20 17:46:23.560079] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:00.192 [2024-11-20 17:46:23.562178] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:00.192 [2024-11-20 17:46:23.562943] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:00.192 [2024-11-20 17:46:23.563123] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:00.192 [2024-11-20 17:46:23.563458] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
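The hello_bdev notices above show the example's flow: start an SPDK app from the generated bdev.json, open nvme0n1, acquire an I/O channel, write "Hello World!", and read it back; the app then stops below. Rerunning it outside the harness takes one command (paths as used in this job; root is typically required for io_uring access to the devices):

    # Drive the bdev hello-world example against the first xNVMe bdev.
    ./build/examples/hello_bdev \
        --json ./test/bdev/bdev.json \
        -b nvme0n1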
00:16:00.192 00:16:00.192 [2024-11-20 17:46:23.563483] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:00.765 00:16:00.765 real 0m1.513s 00:16:00.765 user 0m1.117s 00:16:00.765 sys 0m0.244s 00:16:00.765 17:46:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.765 17:46:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:00.765 ************************************ 00:16:00.765 END TEST bdev_hello_world 00:16:00.765 ************************************ 00:16:01.050 17:46:24 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:01.050 17:46:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:01.050 17:46:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.050 17:46:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.050 ************************************ 00:16:01.050 START TEST bdev_bounds 00:16:01.050 ************************************ 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:01.050 Process bdevio pid: 72473 00:16:01.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72473 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72473' 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72473 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72473 ']' 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.050 17:46:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:01.050 [2024-11-20 17:46:24.419067] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:16:01.051 [2024-11-20 17:46:24.419336] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72473 ] 00:16:01.051 [2024-11-20 17:46:24.579295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:01.312 [2024-11-20 17:46:24.689750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.312 [2024-11-20 17:46:24.690166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.312 [2024-11-20 17:46:24.690052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.885 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.885 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:01.885 17:46:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:01.885 I/O targets: 00:16:01.885 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:01.885 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:01.885 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:01.885 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:01.885 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:01.885 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:01.885 00:16:01.885 00:16:01.885 CUnit - A unit testing framework for C - Version 2.1-3 00:16:01.885 http://cunit.sourceforge.net/ 00:16:01.885 00:16:01.885 00:16:01.885 Suite: bdevio tests on: nvme3n1 00:16:01.885 Test: blockdev write read block ...passed 00:16:01.885 Test: blockdev write zeroes read block ...passed 00:16:01.885 Test: blockdev write zeroes read no split ...passed 00:16:01.885 Test: blockdev write zeroes read split ...passed 00:16:02.144 Test: blockdev write zeroes read split partial ...passed 00:16:02.144 Test: blockdev reset ...passed 00:16:02.144 Test: blockdev write read 8 blocks ...passed 00:16:02.144 Test: blockdev write read size > 128k ...passed 00:16:02.144 Test: blockdev write read invalid size ...passed 00:16:02.144 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:02.144 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:02.144 Test: blockdev write read max offset ...passed 00:16:02.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:02.144 Test: blockdev writev readv 8 blocks ...passed 00:16:02.144 Test: blockdev writev readv 30 x 1block ...passed 00:16:02.144 Test: blockdev writev readv block ...passed 00:16:02.144 Test: blockdev writev readv size > 128k ...passed 00:16:02.144 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:02.144 Test: blockdev comparev and writev ...passed 00:16:02.144 Test: blockdev nvme passthru rw ...passed 00:16:02.144 Test: blockdev nvme passthru vendor specific ...passed 00:16:02.144 Test: blockdev nvme admin passthru ...passed 00:16:02.144 Test: blockdev copy ...passed 00:16:02.144 Suite: bdevio tests on: nvme2n1 00:16:02.144 Test: blockdev write read block ...passed 00:16:02.144 Test: blockdev write zeroes read block ...passed 00:16:02.144 Test: blockdev write zeroes read no split ...passed 00:16:02.144 Test: blockdev write zeroes read split ...passed 00:16:02.144 Test: blockdev write zeroes read split partial ...passed 00:16:02.144 Test: blockdev reset ...passed 
00:16:02.144 Test: blockdev write read 8 blocks ...passed 00:16:02.144 Test: blockdev write read size > 128k ...passed 00:16:02.144 Test: blockdev write read invalid size ...passed 00:16:02.144 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:02.144 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:02.144 Test: blockdev write read max offset ...passed 00:16:02.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:02.144 Test: blockdev writev readv 8 blocks ...passed 00:16:02.144 Test: blockdev writev readv 30 x 1block ...passed 00:16:02.144 Test: blockdev writev readv block ...passed 00:16:02.144 Test: blockdev writev readv size > 128k ...passed 00:16:02.144 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:02.144 Test: blockdev comparev and writev ...passed 00:16:02.144 Test: blockdev nvme passthru rw ...passed 00:16:02.144 Test: blockdev nvme passthru vendor specific ...passed 00:16:02.144 Test: blockdev nvme admin passthru ...passed 00:16:02.144 Test: blockdev copy ...passed 00:16:02.144 Suite: bdevio tests on: nvme1n1 00:16:02.144 Test: blockdev write read block ...passed 00:16:02.144 Test: blockdev write zeroes read block ...passed 00:16:02.144 Test: blockdev write zeroes read no split ...passed 00:16:02.144 Test: blockdev write zeroes read split ...passed 00:16:02.144 Test: blockdev write zeroes read split partial ...passed 00:16:02.144 Test: blockdev reset ...passed 00:16:02.144 Test: blockdev write read 8 blocks ...passed 00:16:02.144 Test: blockdev write read size > 128k ...passed 00:16:02.144 Test: blockdev write read invalid size ...passed 00:16:02.144 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:02.144 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:02.144 Test: blockdev write read max offset ...passed 00:16:02.144 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:02.144 Test: blockdev writev readv 8 blocks ...passed 00:16:02.144 Test: blockdev writev readv 30 x 1block ...passed 00:16:02.144 Test: blockdev writev readv block ...passed 00:16:02.144 Test: blockdev writev readv size > 128k ...passed 00:16:02.144 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:02.144 Test: blockdev comparev and writev ...passed 00:16:02.144 Test: blockdev nvme passthru rw ...passed 00:16:02.144 Test: blockdev nvme passthru vendor specific ...passed 00:16:02.144 Test: blockdev nvme admin passthru ...passed 00:16:02.144 Test: blockdev copy ...passed 00:16:02.144 Suite: bdevio tests on: nvme0n3 00:16:02.144 Test: blockdev write read block ...passed 00:16:02.144 Test: blockdev write zeroes read block ...passed 00:16:02.144 Test: blockdev write zeroes read no split ...passed 00:16:02.144 Test: blockdev write zeroes read split ...passed 00:16:02.144 Test: blockdev write zeroes read split partial ...passed 00:16:02.144 Test: blockdev reset ...passed 00:16:02.144 Test: blockdev write read 8 blocks ...passed 00:16:02.144 Test: blockdev write read size > 128k ...passed 00:16:02.144 Test: blockdev write read invalid size ...passed 00:16:02.144 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:02.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:02.405 Test: blockdev write read max offset ...passed 00:16:02.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:02.405 Test: blockdev writev readv 8 blocks 
...passed 00:16:02.405 Test: blockdev writev readv 30 x 1block ...passed 00:16:02.405 Test: blockdev writev readv block ...passed 00:16:02.405 Test: blockdev writev readv size > 128k ...passed 00:16:02.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:02.405 Test: blockdev comparev and writev ...passed 00:16:02.405 Test: blockdev nvme passthru rw ...passed 00:16:02.405 Test: blockdev nvme passthru vendor specific ...passed 00:16:02.405 Test: blockdev nvme admin passthru ...passed 00:16:02.405 Test: blockdev copy ...passed 00:16:02.405 Suite: bdevio tests on: nvme0n2 00:16:02.405 Test: blockdev write read block ...passed 00:16:02.405 Test: blockdev write zeroes read block ...passed 00:16:02.405 Test: blockdev write zeroes read no split ...passed 00:16:02.405 Test: blockdev write zeroes read split ...passed 00:16:02.405 Test: blockdev write zeroes read split partial ...passed 00:16:02.405 Test: blockdev reset ...passed 00:16:02.405 Test: blockdev write read 8 blocks ...passed 00:16:02.405 Test: blockdev write read size > 128k ...passed 00:16:02.405 Test: blockdev write read invalid size ...passed 00:16:02.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:02.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:02.405 Test: blockdev write read max offset ...passed 00:16:02.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:02.405 Test: blockdev writev readv 8 blocks ...passed 00:16:02.405 Test: blockdev writev readv 30 x 1block ...passed 00:16:02.405 Test: blockdev writev readv block ...passed 00:16:02.405 Test: blockdev writev readv size > 128k ...passed 00:16:02.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:02.405 Test: blockdev comparev and writev ...passed 00:16:02.405 Test: blockdev nvme passthru rw ...passed 00:16:02.405 Test: blockdev nvme passthru vendor specific ...passed 00:16:02.405 Test: blockdev nvme admin passthru ...passed 00:16:02.405 Test: blockdev copy ...passed 00:16:02.405 Suite: bdevio tests on: nvme0n1 00:16:02.405 Test: blockdev write read block ...passed 00:16:02.405 Test: blockdev write zeroes read block ...passed 00:16:02.405 Test: blockdev write zeroes read no split ...passed 00:16:02.405 Test: blockdev write zeroes read split ...passed 00:16:02.405 Test: blockdev write zeroes read split partial ...passed 00:16:02.405 Test: blockdev reset ...passed 00:16:02.405 Test: blockdev write read 8 blocks ...passed 00:16:02.405 Test: blockdev write read size > 128k ...passed 00:16:02.406 Test: blockdev write read invalid size ...passed 00:16:02.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:02.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:02.406 Test: blockdev write read max offset ...passed 00:16:02.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:02.406 Test: blockdev writev readv 8 blocks ...passed 00:16:02.406 Test: blockdev writev readv 30 x 1block ...passed 00:16:02.406 Test: blockdev writev readv block ...passed 00:16:02.406 Test: blockdev writev readv size > 128k ...passed 00:16:02.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:02.406 Test: blockdev comparev and writev ...passed 00:16:02.406 Test: blockdev nvme passthru rw ...passed 00:16:02.406 Test: blockdev nvme passthru vendor specific ...passed 00:16:02.406 Test: blockdev nvme admin passthru ...passed 00:16:02.406 Test: blockdev copy ...passed 
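All six suites above run the same CUnit test list, one bdev per suite; the run summary follows below. The harness starts bdevio in wait mode and then triggers the suites over RPC (bdev/blockdev.sh@288 and @293); a condensed sketch of that two-step launch, with the readiness poll simplified from the harness's waitforlisten helper:

    # Start bdevio waiting for RPC (-w), with -s 0 and the generated config,
    # as traced above; it listens on /var/tmp/spdk.sock by default.
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!

    # Crude readiness poll: wait until the RPC socket answers.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Kick off the CUnit suites, then reap the server.
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"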
00:16:02.406 00:16:02.406 Run Summary: Type Total Ran Passed Failed Inactive 00:16:02.406 suites 6 6 n/a 0 0 00:16:02.406 tests 138 138 138 0 0 00:16:02.406 asserts 780 780 780 0 n/a 00:16:02.406 00:16:02.406 Elapsed time = 1.183 seconds 00:16:02.406 0 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72473 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72473 ']' 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72473 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72473 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.406 killing process with pid 72473 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72473' 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72473 00:16:02.406 17:46:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72473 00:16:03.348 17:46:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:03.348 00:16:03.348 real 0m2.288s 00:16:03.348 user 0m5.602s 00:16:03.348 sys 0m0.334s 00:16:03.348 17:46:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.348 17:46:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:03.348 ************************************ 00:16:03.348 END TEST bdev_bounds 00:16:03.348 ************************************ 00:16:03.348 17:46:26 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:03.348 17:46:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:03.348 17:46:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.348 17:46:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:03.348 ************************************ 00:16:03.348 START TEST bdev_nbd 00:16:03.348 ************************************ 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
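bdev_nbd, whose setup starts above, exercises the same six bdevs through the kernel's NBD driver: a bdev_svc app hosts them on a private RPC socket, each bdev is exported as a /dev/nbdX node, and the mapping is queried back as JSON. A trimmed sketch of that export round trip, assuming the nbd kernel module is loaded and using the socket path from the trace:

    sock=/var/tmp/spdk-nbd.sock

    # Export a bdev via NBD; with no explicit device argument the RPC
    # picks a free /dev/nbdX and prints it, as seen in the trace below.
    nbd_dev=$(./scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1)

    # Report the current bdev <-> nbd mapping (nbd_common.sh@118 below).
    ./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'

    # Detach when done.
    ./scripts/rpc.py -s "$sock" nbd_stop_disk "$nbd_dev"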
00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72528 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:03.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72528 /var/tmp/spdk-nbd.sock 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72528 ']' 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:03.349 17:46:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:03.349 [2024-11-20 17:46:26.792511] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
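Once bdev_svc is up, each export below is verified by the waitfornbd helper (common/autotest_common.sh@872-893): poll /proc/partitions until the node appears, then read a single 4 KiB block with direct I/O and check the copied size. A standalone sketch of that verification; the function name and device argument are illustrative:

    verify_nbd() {
        local nbd=$1 tmp i
        # Poll for the kernel to register the device, as waitfornbd does.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read proves the export actually serves data.
        tmp=$(mktemp)
        dd if="/dev/$nbd" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s "$tmp") -eq 4096 ]] || return 1
        rm -f "$tmp"
    }

    verify_nbd nbd0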
00:16:03.349 [2024-11-20 17:46:26.792855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.610 [2024-11-20 17:46:26.952499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.611 [2024-11-20 17:46:27.084210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:04.184 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.446 
1+0 records in 00:16:04.446 1+0 records out 00:16:04.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816088 s, 5.0 MB/s 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:04.446 17:46:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.706 1+0 records in 00:16:04.706 1+0 records out 00:16:04.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528573 s, 7.7 MB/s 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:04.706 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:04.968 17:46:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.968 1+0 records in 00:16:04.968 1+0 records out 00:16:04.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010936 s, 3.7 MB/s 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:04.968 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.230 1+0 records in 00:16:05.230 1+0 records out 00:16:05.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117852 s, 3.5 MB/s 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:05.230 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.491 1+0 records in 00:16:05.491 1+0 records out 00:16:05.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000878996 s, 4.7 MB/s 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:05.491 17:46:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:05.753 17:46:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.753 1+0 records in 00:16:05.753 1+0 records out 00:16:05.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00155437 s, 2.6 MB/s 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:05.753 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd0", 00:16:06.016 "bdev_name": "nvme0n1" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd1", 00:16:06.016 "bdev_name": "nvme0n2" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd2", 00:16:06.016 "bdev_name": "nvme0n3" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd3", 00:16:06.016 "bdev_name": "nvme1n1" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd4", 00:16:06.016 "bdev_name": "nvme2n1" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd5", 00:16:06.016 "bdev_name": "nvme3n1" 00:16:06.016 } 00:16:06.016 ]' 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd0", 00:16:06.016 "bdev_name": "nvme0n1" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd1", 00:16:06.016 "bdev_name": "nvme0n2" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd2", 00:16:06.016 "bdev_name": "nvme0n3" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd3", 00:16:06.016 "bdev_name": "nvme1n1" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd4", 00:16:06.016 "bdev_name": "nvme2n1" 00:16:06.016 }, 00:16:06.016 { 00:16:06.016 "nbd_device": "/dev/nbd5", 00:16:06.016 "bdev_name": "nvme3n1" 00:16:06.016 } 00:16:06.016 ]' 00:16:06.016 17:46:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.016 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.278 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.540 17:46:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.801 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.063 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:07.324 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.325 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:07.587 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:07.587 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:07.587 17:46:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:07.587 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:07.849 /dev/nbd0 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.849 1+0 records in 00:16:07.849 1+0 records out 00:16:07.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122868 s, 3.3 MB/s 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:07.849 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:08.111 /dev/nbd1 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.111 1+0 records in 00:16:08.111 1+0 records out 00:16:08.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108252 s, 3.8 MB/s 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:08.111 17:46:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:08.111 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:08.372 /dev/nbd10 00:16:08.372 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:08.372 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:08.372 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.373 1+0 records in 00:16:08.373 1+0 records out 00:16:08.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126533 s, 3.2 MB/s 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:08.373 17:46:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:08.634 /dev/nbd11 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.634 17:46:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.634 1+0 records in 00:16:08.634 1+0 records out 00:16:08.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00226611 s, 1.8 MB/s 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:08.634 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:08.896 /dev/nbd12 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.896 1+0 records in 00:16:08.896 1+0 records out 00:16:08.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974951 s, 4.2 MB/s 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:08.896 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:09.159 /dev/nbd13 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.159 1+0 records in 00:16:09.159 1+0 records out 00:16:09.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00188414 s, 2.2 MB/s 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.159 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:09.421 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:09.421 { 00:16:09.421 "nbd_device": "/dev/nbd0", 00:16:09.421 "bdev_name": "nvme0n1" 00:16:09.421 }, 00:16:09.421 { 00:16:09.421 "nbd_device": "/dev/nbd1", 00:16:09.421 "bdev_name": "nvme0n2" 00:16:09.421 }, 00:16:09.421 { 00:16:09.421 "nbd_device": "/dev/nbd10", 00:16:09.421 "bdev_name": "nvme0n3" 00:16:09.421 }, 00:16:09.421 { 00:16:09.421 "nbd_device": "/dev/nbd11", 00:16:09.421 "bdev_name": "nvme1n1" 00:16:09.421 }, 00:16:09.421 { 00:16:09.421 "nbd_device": "/dev/nbd12", 00:16:09.421 "bdev_name": "nvme2n1" 00:16:09.421 }, 00:16:09.422 { 00:16:09.422 "nbd_device": "/dev/nbd13", 00:16:09.422 "bdev_name": "nvme3n1" 00:16:09.422 } 00:16:09.422 ]' 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:16:09.422 { 00:16:09.422 "nbd_device": "/dev/nbd0", 00:16:09.422 "bdev_name": "nvme0n1" 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "nbd_device": "/dev/nbd1", 00:16:09.422 "bdev_name": "nvme0n2" 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "nbd_device": "/dev/nbd10", 00:16:09.422 "bdev_name": "nvme0n3" 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "nbd_device": "/dev/nbd11", 00:16:09.422 "bdev_name": "nvme1n1" 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "nbd_device": "/dev/nbd12", 00:16:09.422 "bdev_name": "nvme2n1" 00:16:09.422 }, 00:16:09.422 { 00:16:09.422 "nbd_device": "/dev/nbd13", 00:16:09.422 "bdev_name": "nvme3n1" 00:16:09.422 } 00:16:09.422 ]' 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:09.422 /dev/nbd1 00:16:09.422 /dev/nbd10 00:16:09.422 /dev/nbd11 00:16:09.422 /dev/nbd12 00:16:09.422 /dev/nbd13' 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:09.422 /dev/nbd1 00:16:09.422 /dev/nbd10 00:16:09.422 /dev/nbd11 00:16:09.422 /dev/nbd12 00:16:09.422 /dev/nbd13' 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:09.422 256+0 records in 00:16:09.422 256+0 records out 00:16:09.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124665 s, 84.1 MB/s 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:09.422 17:46:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:09.684 256+0 records in 00:16:09.684 256+0 records out 00:16:09.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238259 s, 4.4 MB/s 00:16:09.684 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:09.684 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:09.684 256+0 records in 00:16:09.684 256+0 records out 00:16:09.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155672 s, 
6.7 MB/s 00:16:09.684 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:09.684 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:09.946 256+0 records in 00:16:09.946 256+0 records out 00:16:09.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222458 s, 4.7 MB/s 00:16:09.946 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:09.946 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:10.519 256+0 records in 00:16:10.519 256+0 records out 00:16:10.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.322523 s, 3.3 MB/s 00:16:10.519 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:10.519 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:10.519 256+0 records in 00:16:10.519 256+0 records out 00:16:10.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.212138 s, 4.9 MB/s 00:16:10.519 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:10.519 17:46:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:10.780 256+0 records in 00:16:10.780 256+0 records out 00:16:10.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249157 s, 4.2 MB/s 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:10.780 
17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.780 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.042 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.302 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:11.564 17:46:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.564 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.564 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.825 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.086 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.348 
17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.348 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:12.610 17:46:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:12.900 malloc_lvol_verify 00:16:12.900 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:12.900 1ef7cfa1-cee1-4488-8e36-a02f0cf6653d 00:16:12.900 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:13.161 5b362800-53b6-4916-99a1-b059064d6622 00:16:13.161 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:13.430 /dev/nbd0 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:13.430 mke2fs 1.47.0 (5-Feb-2023) 00:16:13.430 Discarding device blocks: 0/4096 
done 00:16:13.430 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:13.430 00:16:13.430 Allocating group tables: 0/1 done 00:16:13.430 Writing inode tables: 0/1 done 00:16:13.430 Creating journal (1024 blocks): done 00:16:13.430 Writing superblocks and filesystem accounting information: 0/1 done 00:16:13.430 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.430 17:46:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72528 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72528 ']' 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72528 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72528 00:16:13.708 killing process with pid 72528 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72528' 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72528 00:16:13.708 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72528 00:16:14.653 ************************************ 00:16:14.653 END TEST bdev_nbd 00:16:14.653 ************************************ 00:16:14.653 17:46:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:14.653 00:16:14.653 real 0m11.254s 00:16:14.653 user 0m15.042s 00:16:14.653 sys 0m3.915s 00:16:14.653 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.653 17:46:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
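The bdev_nbd trace above boils down to one round trip: each xNVMe bdev is exported as a kernel NBD device over the /var/tmp/spdk-nbd.sock RPC socket, the harness polls /proc/partitions until the node is live, 1 MiB of random data is written through every /dev/nbdX and read back with cmp, and the exports are torn down until nbd_get_disks returns an empty list. The same stop-and-poll pattern closes the lvol check at the end, where a 16 MiB malloc bdev is wrapped in an lvstore, a 4 MiB lvol is exported on /dev/nbd0, and mkfs.ext4 must succeed on it before the app process is killed. Below is a condensed bash sketch of the main flow, reusing the rpc.py subcommands and paths visible in this run; the retry pacing, the extra dd read probe inside waitfornbd, and most error handling are simplified, and the helper name wait_for_nbd is ours rather than the script's.

```bash
#!/usr/bin/env bash
# Condensed sketch of the NBD round trip traced above, assuming the same
# rpc.py socket, bdev names, and NBD nodes as this run.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

# Poll /proc/partitions (up to 20 tries) until the kernel registers the node.
wait_for_nbd() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}

# Export every bdev as an NBD device and wait for it to appear.
for i in "${!bdevs[@]}"; do
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
    wait_for_nbd "$(basename "${nbds[i]}")"
done

# Push 1 MiB of random data through each device, then verify byte for byte.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in "${nbds[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbds[@]}"; do
    cmp -b -n 1M "$tmp" "$nbd"
done
rm -f "$tmp"

# Tear down and confirm nbd_get_disks reports nothing left exported.
for nbd in "${nbds[@]}"; do
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"
done
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
(( count == 0 ))
```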
00:16:14.653 17:46:38 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:14.653 17:46:38 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:14.653 17:46:38 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:14.653 17:46:38 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:14.653 17:46:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:14.653 17:46:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.653 17:46:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.653 ************************************ 00:16:14.653 START TEST bdev_fio 00:16:14.653 ************************************ 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:14.653 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.653 17:46:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:14.653 ************************************ 00:16:14.653 START TEST bdev_fio_rw_verify 00:16:14.653 ************************************ 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:14.654 17:46:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:14.914 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:14.914 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:14.914 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:14.914 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:14.914 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:14.914 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:14.914 fio-3.35 00:16:14.914 Starting 6 threads 00:16:27.151 00:16:27.151 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72942: Wed Nov 20 17:46:49 2024 00:16:27.151 read: IOPS=12.8k, BW=50.0MiB/s (52.4MB/s)(500MiB/10001msec) 00:16:27.151 slat (usec): min=2, max=2006, avg= 6.44, stdev=14.23 00:16:27.151 clat (usec): min=107, max=533651, avg=1489.76, stdev=4296.03 00:16:27.151 lat (usec): min=111, max=533670, avg=1496.20, 
stdev=4296.12 00:16:27.151 clat percentiles (usec): 00:16:27.151 | 50.000th=[ 1319], 99.000th=[ 4047], 99.900th=[ 5669], 00:16:27.151 | 99.990th=[ 6915], 99.999th=[534774] 00:16:27.151 write: IOPS=13.2k, BW=51.7MiB/s (54.2MB/s)(517MiB/10001msec); 0 zone resets 00:16:27.151 slat (usec): min=12, max=4265, avg=48.33, stdev=175.98 00:16:27.151 clat (usec): min=99, max=75191, avg=1832.35, stdev=1515.89 00:16:27.151 lat (usec): min=113, max=75219, avg=1880.69, stdev=1525.81 00:16:27.151 clat percentiles (usec): 00:16:27.151 | 50.000th=[ 1663], 99.000th=[ 4686], 99.900th=[ 6521], 99.990th=[71828], 00:16:27.151 | 99.999th=[74974] 00:16:27.152 bw ( KiB/s): min=47852, max=67100, per=100.00%, avg=53662.67, stdev=837.38, samples=113 00:16:27.152 iops : min=11960, max=16774, avg=13414.38, stdev=209.43, samples=113 00:16:27.152 lat (usec) : 100=0.01%, 250=1.67%, 500=5.32%, 750=7.63%, 1000=10.79% 00:16:27.152 lat (msec) : 2=45.76%, 4=26.88%, 10=1.93%, 100=0.02%, 750=0.01% 00:16:27.152 cpu : usr=42.82%, sys=31.01%, ctx=5083, majf=0, minf=13653 00:16:27.152 IO depths : 1=10.9%, 2=23.3%, 4=51.5%, 8=14.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.152 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.152 issued rwts: total=127921,132448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:27.152 00:16:27.152 Run status group 0 (all jobs): 00:16:27.152 READ: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=500MiB (524MB), run=10001-10001msec 00:16:27.152 WRITE: bw=51.7MiB/s (54.2MB/s), 51.7MiB/s-51.7MiB/s (54.2MB/s-54.2MB/s), io=517MiB (543MB), run=10001-10001msec 00:16:27.152 ----------------------------------------------------- 00:16:27.152 Suppressions used: 00:16:27.152 count bytes template 00:16:27.152 6 48 /usr/src/fio/parse.c 00:16:27.152 4444 426624 /usr/src/fio/iolog.c 00:16:27.152 1 8 libtcmalloc_minimal.so 00:16:27.152 1 904 libcrypto.so 00:16:27.152 ----------------------------------------------------- 00:16:27.152 00:16:27.152 00:16:27.152 real 0m12.107s 00:16:27.152 user 0m27.281s 00:16:27.152 sys 0m18.970s 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.152 ************************************ 00:16:27.152 END TEST bdev_fio_rw_verify 00:16:27.152 ************************************ 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:27.152 17:46:50 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "477e9821-d6dc-42fe-9a16-ca3090394aab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "477e9821-d6dc-42fe-9a16-ca3090394aab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dde1146a-a116-4d26-851e-b8128b3e84d9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dde1146a-a116-4d26-851e-b8128b3e84d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "3cf6d9ec-fdff-405d-9912-23905874775e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3cf6d9ec-fdff-405d-9912-23905874775e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": 
false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "214f87d3-8c0f-4a6c-99b8-36fff8439e70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "214f87d3-8c0f-4a6c-99b8-36fff8439e70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "748d9d8e-95ee-47cc-a3d9-1f62c31331dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "748d9d8e-95ee-47cc-a3d9-1f62c31331dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "766bbd84-6433-4103-8662-1ebdae6b7bb8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "766bbd84-6433-4103-8662-1ebdae6b7bb8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:27.152 /home/vagrant/spdk_repo/spdk 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:27.152 00:16:27.152 real 0m12.281s 00:16:27.152 
user 0m27.347s 00:16:27.152 sys 0m19.055s 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.152 17:46:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:27.152 ************************************ 00:16:27.152 END TEST bdev_fio 00:16:27.152 ************************************ 00:16:27.152 17:46:50 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:27.152 17:46:50 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:27.152 17:46:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:27.152 17:46:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.152 17:46:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.152 ************************************ 00:16:27.152 START TEST bdev_verify 00:16:27.152 ************************************ 00:16:27.152 17:46:50 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:27.152 [2024-11-20 17:46:50.449598] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:16:27.152 [2024-11-20 17:46:50.449736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:16:27.152 [2024-11-20 17:46:50.614172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.413 [2024-11-20 17:46:50.747936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.413 [2024-11-20 17:46:50.747964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.675 Running I/O for 5 seconds... 
00:16:30.006 22624.00 IOPS, 88.38 MiB/s [2024-11-20T17:46:54.491Z] 22754.00 IOPS, 88.88 MiB/s [2024-11-20T17:46:55.457Z] 23029.67 IOPS, 89.96 MiB/s [2024-11-20T17:46:56.402Z] 22928.75 IOPS, 89.57 MiB/s [2024-11-20T17:46:56.402Z] 22848.20 IOPS, 89.25 MiB/s 00:16:32.862 Latency(us) 00:16:32.862 [2024-11-20T17:46:56.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.862 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x0 length 0x80000 00:16:32.862 nvme0n1 : 5.05 1823.25 7.12 0.00 0.00 70068.92 6125.10 73400.32 00:16:32.862 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x80000 length 0x80000 00:16:32.862 nvme0n1 : 5.02 1759.34 6.87 0.00 0.00 72619.51 9376.69 84289.38 00:16:32.862 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x0 length 0x80000 00:16:32.862 nvme0n2 : 5.06 1797.08 7.02 0.00 0.00 70952.78 9931.22 76223.41 00:16:32.862 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x80000 length 0x80000 00:16:32.862 nvme0n2 : 5.05 1772.89 6.93 0.00 0.00 71920.92 14518.74 76223.41 00:16:32.862 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x0 length 0x80000 00:16:32.862 nvme0n3 : 5.06 1796.42 7.02 0.00 0.00 70864.00 11443.59 73803.62 00:16:32.862 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x80000 length 0x80000 00:16:32.862 nvme0n3 : 5.06 1772.22 6.92 0.00 0.00 71812.14 12703.90 68157.44 00:16:32.862 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x0 length 0xbd0bd 00:16:32.862 nvme1n1 : 5.08 2280.75 8.91 0.00 0.00 55590.57 4889.99 64124.46 00:16:32.862 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:32.862 nvme1n1 : 5.07 2248.19 8.78 0.00 0.00 56428.95 6856.07 61301.37 00:16:32.862 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x0 length 0xa0000 00:16:32.862 nvme2n1 : 5.07 1866.88 7.29 0.00 0.00 67957.54 8922.98 72997.02 00:16:32.862 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0xa0000 length 0xa0000 00:16:32.862 nvme2n1 : 5.07 1842.58 7.20 0.00 0.00 68668.25 8872.57 79853.10 00:16:32.862 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x0 length 0x20000 00:16:32.862 nvme3n1 : 5.07 1818.46 7.10 0.00 0.00 69692.46 4915.20 72593.72 00:16:32.862 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:32.862 Verification LBA range: start 0x20000 length 0x20000 00:16:32.862 nvme3n1 : 5.07 1791.54 7.00 0.00 0.00 70477.22 4234.63 71787.13 00:16:32.862 [2024-11-20T17:46:56.402Z] =================================================================================================================== 00:16:32.862 [2024-11-20T17:46:56.402Z] Total : 22569.61 88.16 0.00 0.00 67565.75 4234.63 84289.38 00:16:33.808 00:16:33.808 real 0m6.762s 00:16:33.808 user 0m10.916s 00:16:33.808 sys 0m1.454s 00:16:33.808 17:46:57 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.808 17:46:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:33.808 ************************************ 00:16:33.808 END TEST bdev_verify 00:16:33.808 ************************************ 00:16:33.808 17:46:57 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:33.808 17:46:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:33.808 17:46:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.808 17:46:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:33.808 ************************************ 00:16:33.808 START TEST bdev_verify_big_io 00:16:33.808 ************************************ 00:16:33.808 17:46:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:33.808 [2024-11-20 17:46:57.282404] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:16:33.808 [2024-11-20 17:46:57.282540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:16:34.071 [2024-11-20 17:46:57.446942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:34.071 [2024-11-20 17:46:57.576997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.071 [2024-11-20 17:46:57.577256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.643 Running I/O for 5 seconds... 
00:16:40.464 912.00 IOPS, 57.00 MiB/s [2024-11-20T17:47:04.263Z] 2486.50 IOPS, 155.41 MiB/s [2024-11-20T17:47:04.263Z] 2823.67 IOPS, 176.48 MiB/s 00:16:40.723 Latency(us) 00:16:40.723 [2024-11-20T17:47:04.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.723 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x0 length 0x8000 00:16:40.723 nvme0n1 : 5.85 123.05 7.69 0.00 0.00 1015020.38 13006.38 1000180.18 00:16:40.723 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x8000 length 0x8000 00:16:40.723 nvme0n1 : 5.94 121.12 7.57 0.00 0.00 1019989.39 9376.69 1639004.95 00:16:40.723 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x0 length 0x8000 00:16:40.723 nvme0n2 : 5.85 102.49 6.41 0.00 0.00 1153296.48 200842.63 1271196.75 00:16:40.723 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x8000 length 0x8000 00:16:40.723 nvme0n2 : 5.94 115.82 7.24 0.00 0.00 1023972.74 91952.05 1051802.39 00:16:40.723 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x0 length 0x8000 00:16:40.723 nvme0n3 : 5.81 110.17 6.89 0.00 0.00 1068405.68 96791.63 1238932.87 00:16:40.723 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x8000 length 0x8000 00:16:40.723 nvme0n3 : 5.93 102.49 6.41 0.00 0.00 1129765.68 65334.35 1613193.85 00:16:40.723 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x0 length 0xbd0b 00:16:40.723 nvme1n1 : 5.81 123.85 7.74 0.00 0.00 921674.60 12250.19 1277649.53 00:16:40.723 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:40.723 nvme1n1 : 5.94 118.59 7.41 0.00 0.00 965031.35 80659.69 2090699.22 00:16:40.723 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x0 length 0xa000 00:16:40.723 nvme2n1 : 5.95 139.84 8.74 0.00 0.00 791159.49 48395.82 1858399.31 00:16:40.723 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0xa000 length 0xa000 00:16:40.723 nvme2n1 : 5.94 96.92 6.06 0.00 0.00 1143322.08 85499.27 1742249.35 00:16:40.723 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x0 length 0x2000 00:16:40.723 nvme3n1 : 5.96 147.74 9.23 0.00 0.00 732197.41 3327.21 1148594.02 00:16:40.723 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:40.723 Verification LBA range: start 0x2000 length 0x2000 00:16:40.723 nvme3n1 : 5.95 172.15 10.76 0.00 0.00 626424.49 3806.13 1090519.04 00:16:40.723 [2024-11-20T17:47:04.263Z] =================================================================================================================== 00:16:40.723 [2024-11-20T17:47:04.263Z] Total : 1474.23 92.14 0.00 0.00 938697.63 3327.21 2090699.22 00:16:41.656 00:16:41.656 real 0m7.783s 00:16:41.656 user 0m14.298s 00:16:41.656 sys 0m0.435s 00:16:41.656 ************************************ 00:16:41.656 END TEST bdev_verify_big_io 00:16:41.656 17:47:04 blockdev_xnvme.bdev_verify_big_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.656 17:47:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.656 ************************************ 00:16:41.656 17:47:05 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:41.656 17:47:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:41.656 17:47:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.656 17:47:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.656 ************************************ 00:16:41.656 START TEST bdev_write_zeroes 00:16:41.656 ************************************ 00:16:41.656 17:47:05 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:41.656 [2024-11-20 17:47:05.118781] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:16:41.656 [2024-11-20 17:47:05.118903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73329 ] 00:16:41.914 [2024-11-20 17:47:05.279762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.914 [2024-11-20 17:47:05.380600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.481 Running I/O for 1 seconds... 00:16:43.418 95552.00 IOPS, 373.25 MiB/s 00:16:43.418 Latency(us) 00:16:43.418 [2024-11-20T17:47:06.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.418 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.418 nvme0n1 : 1.02 15715.59 61.39 0.00 0.00 8136.06 4965.61 16938.54 00:16:43.418 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.418 nvme0n2 : 1.01 15663.66 61.19 0.00 0.00 8156.99 5394.12 17543.48 00:16:43.418 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.418 nvme0n3 : 1.01 15644.88 61.11 0.00 0.00 8160.99 5469.74 17543.48 00:16:43.418 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.418 nvme1n1 : 1.02 16724.02 65.33 0.00 0.00 7627.85 4637.93 13510.50 00:16:43.418 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.418 nvme2n1 : 1.02 15624.95 61.03 0.00 0.00 8159.38 5494.94 16333.59 00:16:43.418 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:43.418 nvme3n1 : 1.02 15607.41 60.97 0.00 0.00 8125.68 4234.63 16636.06 00:16:43.418 [2024-11-20T17:47:06.958Z] =================================================================================================================== 00:16:43.418 [2024-11-20T17:47:06.958Z] Total : 94980.52 371.02 0.00 0.00 8055.80 4234.63 17543.48 00:16:43.988 00:16:43.988 real 0m2.458s 00:16:43.988 user 0m1.851s 00:16:43.988 sys 0m0.436s 00:16:43.988 17:47:07 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.988 17:47:07 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:43.988 
************************************ 00:16:43.988 END TEST bdev_write_zeroes 00:16:43.988 ************************************ 00:16:44.248 17:47:07 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:44.248 17:47:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:44.248 17:47:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.248 17:47:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:44.248 ************************************ 00:16:44.248 START TEST bdev_json_nonenclosed 00:16:44.248 ************************************ 00:16:44.248 17:47:07 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:44.248 [2024-11-20 17:47:07.649075] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:16:44.248 [2024-11-20 17:47:07.649196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73381 ] 00:16:44.507 [2024-11-20 17:47:07.809923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.507 [2024-11-20 17:47:07.937146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.508 [2024-11-20 17:47:07.937250] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:44.508 [2024-11-20 17:47:07.937273] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:44.508 [2024-11-20 17:47:07.937284] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:44.766 00:16:44.766 real 0m0.547s 00:16:44.766 user 0m0.345s 00:16:44.766 sys 0m0.096s 00:16:44.766 17:47:08 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.766 17:47:08 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:44.766 ************************************ 00:16:44.766 END TEST bdev_json_nonenclosed 00:16:44.766 ************************************ 00:16:44.766 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:44.766 17:47:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:44.766 17:47:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.766 17:47:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:44.766 ************************************ 00:16:44.766 START TEST bdev_json_nonarray 00:16:44.766 ************************************ 00:16:44.766 17:47:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:44.766 [2024-11-20 17:47:08.276089] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:16:44.766 [2024-11-20 17:47:08.276231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73401 ] 00:16:45.028 [2024-11-20 17:47:08.438495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.028 [2024-11-20 17:47:08.543415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.028 [2024-11-20 17:47:08.543501] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:45.028 [2024-11-20 17:47:08.543518] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:45.028 [2024-11-20 17:47:08.543527] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:45.287 00:16:45.287 real 0m0.517s 00:16:45.287 user 0m0.306s 00:16:45.287 sys 0m0.105s 00:16:45.287 17:47:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.287 ************************************ 00:16:45.287 END TEST bdev_json_nonarray 00:16:45.287 ************************************ 00:16:45.287 17:47:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:45.287 17:47:08 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:45.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:51.117 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.117 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.683 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.683 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.942 00:16:51.942 real 0m56.794s 00:16:51.942 user 1m22.138s 00:16:51.942 sys 0m42.065s 00:16:51.942 17:47:15 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.942 ************************************ 00:16:51.942 END TEST blockdev_xnvme 00:16:51.942 ************************************ 00:16:51.942 17:47:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:51.942 17:47:15 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:51.942 17:47:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.942 17:47:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.942 17:47:15 -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.942 ************************************ 00:16:51.942 START TEST ublk 00:16:51.942 ************************************ 00:16:51.942 17:47:15 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:51.942 * Looking for test storage... 00:16:51.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:51.942 17:47:15 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.942 17:47:15 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.942 17:47:15 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.203 17:47:15 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.203 17:47:15 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.203 17:47:15 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.203 17:47:15 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.203 17:47:15 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.203 17:47:15 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.203 17:47:15 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.203 17:47:15 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.203 17:47:15 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:52.203 17:47:15 ublk -- scripts/common.sh@345 -- # : 1 00:16:52.203 17:47:15 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.203 17:47:15 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.203 17:47:15 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:52.203 17:47:15 ublk -- scripts/common.sh@353 -- # local d=1 00:16:52.203 17:47:15 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.203 17:47:15 ublk -- scripts/common.sh@355 -- # echo 1 00:16:52.203 17:47:15 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.203 17:47:15 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@353 -- # local d=2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.203 17:47:15 ublk -- scripts/common.sh@355 -- # echo 2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.203 17:47:15 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.203 17:47:15 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.203 17:47:15 ublk -- scripts/common.sh@368 -- # return 0 00:16:52.203 17:47:15 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.203 17:47:15 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.203 --rc genhtml_branch_coverage=1 00:16:52.203 --rc genhtml_function_coverage=1 00:16:52.203 --rc genhtml_legend=1 00:16:52.203 --rc geninfo_all_blocks=1 00:16:52.203 --rc geninfo_unexecuted_blocks=1 00:16:52.203 00:16:52.203 ' 00:16:52.203 17:47:15 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.203 --rc genhtml_branch_coverage=1 00:16:52.203 --rc genhtml_function_coverage=1 00:16:52.203 --rc genhtml_legend=1 00:16:52.203 --rc geninfo_all_blocks=1 00:16:52.203 --rc geninfo_unexecuted_blocks=1 00:16:52.203 00:16:52.203 ' 00:16:52.203 17:47:15 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.203 --rc genhtml_branch_coverage=1 00:16:52.203 --rc genhtml_function_coverage=1 00:16:52.203 --rc genhtml_legend=1 00:16:52.203 --rc geninfo_all_blocks=1 00:16:52.203 --rc geninfo_unexecuted_blocks=1 00:16:52.203 00:16:52.203 ' 00:16:52.203 17:47:15 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.203 --rc genhtml_branch_coverage=1 00:16:52.203 --rc genhtml_function_coverage=1 00:16:52.203 --rc genhtml_legend=1 00:16:52.203 --rc geninfo_all_blocks=1 00:16:52.203 --rc geninfo_unexecuted_blocks=1 00:16:52.203 00:16:52.203 ' 00:16:52.203 17:47:15 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:52.203 17:47:15 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:52.203 17:47:15 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:52.203 17:47:15 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:52.203 17:47:15 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:52.203 17:47:15 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:52.203 17:47:15 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:52.203 17:47:15 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:52.203 17:47:15 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:52.204 17:47:15 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:52.204 17:47:15 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:52.204 17:47:15 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:52.204 17:47:15 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.204 17:47:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.204 ************************************ 00:16:52.204 START TEST test_save_ublk_config 00:16:52.204 ************************************ 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73706 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73706 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73706 ']' 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.204 17:47:15 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:52.204 [2024-11-20 17:47:15.600555] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:16:52.204 [2024-11-20 17:47:15.600677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73706 ] 00:16:52.462 [2024-11-20 17:47:15.760204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.462 [2024-11-20 17:47:15.859292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.028 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.028 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:53.028 17:47:16 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:53.028 17:47:16 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:53.028 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.028 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:53.028 [2024-11-20 17:47:16.470895] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:53.028 [2024-11-20 17:47:16.471673] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:53.028 malloc0 00:16:53.028 [2024-11-20 17:47:16.535022] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:53.028 [2024-11-20 17:47:16.535099] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:53.028 [2024-11-20 17:47:16.535109] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:53.028 [2024-11-20 17:47:16.535116] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:53.029 [2024-11-20 17:47:16.543958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:53.029 [2024-11-20 17:47:16.543977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:53.029 [2024-11-20 17:47:16.550896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:53.029 [2024-11-20 17:47:16.550996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:53.320 [2024-11-20 17:47:16.567897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:53.320 0 00:16:53.320 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.320 17:47:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:53.320 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.320 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:53.598 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.598 17:47:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:53.598 "subsystems": [ 00:16:53.598 { 00:16:53.598 "subsystem": "fsdev", 00:16:53.598 "config": [ 00:16:53.598 { 00:16:53.598 "method": "fsdev_set_opts", 00:16:53.598 "params": { 00:16:53.598 "fsdev_io_pool_size": 65535, 00:16:53.598 "fsdev_io_cache_size": 256 00:16:53.598 } 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "subsystem": "keyring", 00:16:53.598 "config": [] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "subsystem": "iobuf", 00:16:53.598 "config": [ 00:16:53.598 { 
00:16:53.598 "method": "iobuf_set_options", 00:16:53.598 "params": { 00:16:53.598 "small_pool_count": 8192, 00:16:53.598 "large_pool_count": 1024, 00:16:53.598 "small_bufsize": 8192, 00:16:53.598 "large_bufsize": 135168, 00:16:53.598 "enable_numa": false 00:16:53.598 } 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "subsystem": "sock", 00:16:53.598 "config": [ 00:16:53.598 { 00:16:53.598 "method": "sock_set_default_impl", 00:16:53.598 "params": { 00:16:53.598 "impl_name": "posix" 00:16:53.598 } 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "method": "sock_impl_set_options", 00:16:53.598 "params": { 00:16:53.598 "impl_name": "ssl", 00:16:53.598 "recv_buf_size": 4096, 00:16:53.598 "send_buf_size": 4096, 00:16:53.598 "enable_recv_pipe": true, 00:16:53.598 "enable_quickack": false, 00:16:53.598 "enable_placement_id": 0, 00:16:53.598 "enable_zerocopy_send_server": true, 00:16:53.598 "enable_zerocopy_send_client": false, 00:16:53.598 "zerocopy_threshold": 0, 00:16:53.598 "tls_version": 0, 00:16:53.598 "enable_ktls": false 00:16:53.598 } 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "method": "sock_impl_set_options", 00:16:53.598 "params": { 00:16:53.598 "impl_name": "posix", 00:16:53.598 "recv_buf_size": 2097152, 00:16:53.598 "send_buf_size": 2097152, 00:16:53.598 "enable_recv_pipe": true, 00:16:53.598 "enable_quickack": false, 00:16:53.598 "enable_placement_id": 0, 00:16:53.598 "enable_zerocopy_send_server": true, 00:16:53.598 "enable_zerocopy_send_client": false, 00:16:53.598 "zerocopy_threshold": 0, 00:16:53.598 "tls_version": 0, 00:16:53.598 "enable_ktls": false 00:16:53.598 } 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "subsystem": "vmd", 00:16:53.598 "config": [] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "subsystem": "accel", 00:16:53.598 "config": [ 00:16:53.598 { 00:16:53.598 "method": "accel_set_options", 00:16:53.598 "params": { 00:16:53.598 "small_cache_size": 128, 00:16:53.598 "large_cache_size": 16, 00:16:53.598 "task_count": 2048, 00:16:53.598 "sequence_count": 2048, 00:16:53.598 "buf_count": 2048 00:16:53.598 } 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "subsystem": "bdev", 00:16:53.598 "config": [ 00:16:53.598 { 00:16:53.598 "method": "bdev_set_options", 00:16:53.598 "params": { 00:16:53.598 "bdev_io_pool_size": 65535, 00:16:53.598 "bdev_io_cache_size": 256, 00:16:53.598 "bdev_auto_examine": true, 00:16:53.598 "iobuf_small_cache_size": 128, 00:16:53.598 "iobuf_large_cache_size": 16 00:16:53.598 } 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "method": "bdev_raid_set_options", 00:16:53.598 "params": { 00:16:53.598 "process_window_size_kb": 1024, 00:16:53.598 "process_max_bandwidth_mb_sec": 0 00:16:53.598 } 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "method": "bdev_iscsi_set_options", 00:16:53.598 "params": { 00:16:53.598 "timeout_sec": 30 00:16:53.598 } 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "method": "bdev_nvme_set_options", 00:16:53.598 "params": { 00:16:53.598 "action_on_timeout": "none", 00:16:53.598 "timeout_us": 0, 00:16:53.598 "timeout_admin_us": 0, 00:16:53.598 "keep_alive_timeout_ms": 10000, 00:16:53.598 "arbitration_burst": 0, 00:16:53.598 "low_priority_weight": 0, 00:16:53.598 "medium_priority_weight": 0, 00:16:53.598 "high_priority_weight": 0, 00:16:53.598 "nvme_adminq_poll_period_us": 10000, 00:16:53.598 "nvme_ioq_poll_period_us": 0, 00:16:53.598 "io_queue_requests": 0, 00:16:53.598 "delay_cmd_submit": true, 00:16:53.598 "transport_retry_count": 4, 00:16:53.598 
"bdev_retry_count": 3, 00:16:53.598 "transport_ack_timeout": 0, 00:16:53.598 "ctrlr_loss_timeout_sec": 0, 00:16:53.598 "reconnect_delay_sec": 0, 00:16:53.598 "fast_io_fail_timeout_sec": 0, 00:16:53.598 "disable_auto_failback": false, 00:16:53.598 "generate_uuids": false, 00:16:53.598 "transport_tos": 0, 00:16:53.598 "nvme_error_stat": false, 00:16:53.598 "rdma_srq_size": 0, 00:16:53.598 "io_path_stat": false, 00:16:53.598 "allow_accel_sequence": false, 00:16:53.598 "rdma_max_cq_size": 0, 00:16:53.598 "rdma_cm_event_timeout_ms": 0, 00:16:53.598 "dhchap_digests": [ 00:16:53.598 "sha256", 00:16:53.598 "sha384", 00:16:53.598 "sha512" 00:16:53.598 ], 00:16:53.598 "dhchap_dhgroups": [ 00:16:53.598 "null", 00:16:53.598 "ffdhe2048", 00:16:53.598 "ffdhe3072", 00:16:53.598 "ffdhe4096", 00:16:53.598 "ffdhe6144", 00:16:53.598 "ffdhe8192" 00:16:53.598 ] 00:16:53.598 } 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "method": "bdev_nvme_set_hotplug", 00:16:53.598 "params": { 00:16:53.598 "period_us": 100000, 00:16:53.598 "enable": false 00:16:53.598 } 00:16:53.598 }, 00:16:53.598 { 00:16:53.599 "method": "bdev_malloc_create", 00:16:53.599 "params": { 00:16:53.599 "name": "malloc0", 00:16:53.599 "num_blocks": 8192, 00:16:53.599 "block_size": 4096, 00:16:53.599 "physical_block_size": 4096, 00:16:53.599 "uuid": "3045b417-e907-4e1d-87af-0139ff2de575", 00:16:53.599 "optimal_io_boundary": 0, 00:16:53.599 "md_size": 0, 00:16:53.599 "dif_type": 0, 00:16:53.599 "dif_is_head_of_md": false, 00:16:53.599 "dif_pi_format": 0 00:16:53.599 } 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "method": "bdev_wait_for_examine" 00:16:53.599 } 00:16:53.599 ] 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "scsi", 00:16:53.599 "config": null 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "scheduler", 00:16:53.599 "config": [ 00:16:53.599 { 00:16:53.599 "method": "framework_set_scheduler", 00:16:53.599 "params": { 00:16:53.599 "name": "static" 00:16:53.599 } 00:16:53.599 } 00:16:53.599 ] 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "vhost_scsi", 00:16:53.599 "config": [] 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "vhost_blk", 00:16:53.599 "config": [] 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "ublk", 00:16:53.599 "config": [ 00:16:53.599 { 00:16:53.599 "method": "ublk_create_target", 00:16:53.599 "params": { 00:16:53.599 "cpumask": "1" 00:16:53.599 } 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "method": "ublk_start_disk", 00:16:53.599 "params": { 00:16:53.599 "bdev_name": "malloc0", 00:16:53.599 "ublk_id": 0, 00:16:53.599 "num_queues": 1, 00:16:53.599 "queue_depth": 128 00:16:53.599 } 00:16:53.599 } 00:16:53.599 ] 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "nbd", 00:16:53.599 "config": [] 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "nvmf", 00:16:53.599 "config": [ 00:16:53.599 { 00:16:53.599 "method": "nvmf_set_config", 00:16:53.599 "params": { 00:16:53.599 "discovery_filter": "match_any", 00:16:53.599 "admin_cmd_passthru": { 00:16:53.599 "identify_ctrlr": false 00:16:53.599 }, 00:16:53.599 "dhchap_digests": [ 00:16:53.599 "sha256", 00:16:53.599 "sha384", 00:16:53.599 "sha512" 00:16:53.599 ], 00:16:53.599 "dhchap_dhgroups": [ 00:16:53.599 "null", 00:16:53.599 "ffdhe2048", 00:16:53.599 "ffdhe3072", 00:16:53.599 "ffdhe4096", 00:16:53.599 "ffdhe6144", 00:16:53.599 "ffdhe8192" 00:16:53.599 ] 00:16:53.599 } 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "method": "nvmf_set_max_subsystems", 00:16:53.599 "params": { 00:16:53.599 "max_subsystems": 1024 
00:16:53.599 } 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "method": "nvmf_set_crdt", 00:16:53.599 "params": { 00:16:53.599 "crdt1": 0, 00:16:53.599 "crdt2": 0, 00:16:53.599 "crdt3": 0 00:16:53.599 } 00:16:53.599 } 00:16:53.599 ] 00:16:53.599 }, 00:16:53.599 { 00:16:53.599 "subsystem": "iscsi", 00:16:53.599 "config": [ 00:16:53.599 { 00:16:53.599 "method": "iscsi_set_options", 00:16:53.599 "params": { 00:16:53.599 "node_base": "iqn.2016-06.io.spdk", 00:16:53.599 "max_sessions": 128, 00:16:53.599 "max_connections_per_session": 2, 00:16:53.599 "max_queue_depth": 64, 00:16:53.599 "default_time2wait": 2, 00:16:53.599 "default_time2retain": 20, 00:16:53.599 "first_burst_length": 8192, 00:16:53.599 "immediate_data": true, 00:16:53.599 "allow_duplicated_isid": false, 00:16:53.599 "error_recovery_level": 0, 00:16:53.599 "nop_timeout": 60, 00:16:53.599 "nop_in_interval": 30, 00:16:53.599 "disable_chap": false, 00:16:53.599 "require_chap": false, 00:16:53.599 "mutual_chap": false, 00:16:53.599 "chap_group": 0, 00:16:53.599 "max_large_datain_per_connection": 64, 00:16:53.599 "max_r2t_per_connection": 4, 00:16:53.599 "pdu_pool_size": 36864, 00:16:53.599 "immediate_data_pool_size": 16384, 00:16:53.599 "data_out_pool_size": 2048 00:16:53.599 } 00:16:53.599 } 00:16:53.599 ] 00:16:53.599 } 00:16:53.599 ] 00:16:53.599 }' 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73706 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73706 ']' 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73706 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73706 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73706' 00:16:53.599 killing process with pid 73706 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73706 00:16:53.599 17:47:16 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73706 00:16:54.533 [2024-11-20 17:47:17.927462] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:54.533 [2024-11-20 17:47:17.966920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:54.533 [2024-11-20 17:47:17.967038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:54.533 [2024-11-20 17:47:17.974902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:54.533 [2024-11-20 17:47:17.974964] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:54.533 [2024-11-20 17:47:17.974977] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:54.533 [2024-11-20 17:47:17.974999] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:54.533 [2024-11-20 17:47:17.975153] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73755 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73755 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73755 ']' 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:55.907 17:47:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:55.907 "subsystems": [ 00:16:55.907 { 00:16:55.907 "subsystem": "fsdev", 00:16:55.907 "config": [ 00:16:55.907 { 00:16:55.907 "method": "fsdev_set_opts", 00:16:55.907 "params": { 00:16:55.907 "fsdev_io_pool_size": 65535, 00:16:55.907 "fsdev_io_cache_size": 256 00:16:55.907 } 00:16:55.907 } 00:16:55.907 ] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "keyring", 00:16:55.907 "config": [] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "iobuf", 00:16:55.907 "config": [ 00:16:55.907 { 00:16:55.907 "method": "iobuf_set_options", 00:16:55.907 "params": { 00:16:55.907 "small_pool_count": 8192, 00:16:55.907 "large_pool_count": 1024, 00:16:55.907 "small_bufsize": 8192, 00:16:55.907 "large_bufsize": 135168, 00:16:55.907 "enable_numa": false 00:16:55.907 } 00:16:55.907 } 00:16:55.907 ] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "sock", 00:16:55.907 "config": [ 00:16:55.907 { 00:16:55.907 "method": "sock_set_default_impl", 00:16:55.907 "params": { 00:16:55.907 "impl_name": "posix" 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "sock_impl_set_options", 00:16:55.907 "params": { 00:16:55.907 "impl_name": "ssl", 00:16:55.907 "recv_buf_size": 4096, 00:16:55.907 "send_buf_size": 4096, 00:16:55.907 "enable_recv_pipe": true, 00:16:55.907 "enable_quickack": false, 00:16:55.907 "enable_placement_id": 0, 00:16:55.907 "enable_zerocopy_send_server": true, 00:16:55.907 "enable_zerocopy_send_client": false, 00:16:55.907 "zerocopy_threshold": 0, 00:16:55.907 "tls_version": 0, 00:16:55.907 "enable_ktls": false 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "sock_impl_set_options", 00:16:55.907 "params": { 00:16:55.907 "impl_name": "posix", 00:16:55.907 "recv_buf_size": 2097152, 00:16:55.907 "send_buf_size": 2097152, 00:16:55.907 "enable_recv_pipe": true, 00:16:55.907 "enable_quickack": false, 00:16:55.907 "enable_placement_id": 0, 00:16:55.907 "enable_zerocopy_send_server": true, 00:16:55.907 "enable_zerocopy_send_client": false, 00:16:55.907 "zerocopy_threshold": 0, 00:16:55.907 "tls_version": 0, 00:16:55.907 "enable_ktls": false 00:16:55.907 } 00:16:55.907 } 00:16:55.907 ] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "vmd", 00:16:55.907 "config": [] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "accel", 00:16:55.907 "config": [ 00:16:55.907 { 00:16:55.907 "method": "accel_set_options", 00:16:55.907 "params": { 00:16:55.907 "small_cache_size": 128, 
00:16:55.907 "large_cache_size": 16, 00:16:55.907 "task_count": 2048, 00:16:55.907 "sequence_count": 2048, 00:16:55.907 "buf_count": 2048 00:16:55.907 } 00:16:55.907 } 00:16:55.907 ] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "bdev", 00:16:55.907 "config": [ 00:16:55.907 { 00:16:55.907 "method": "bdev_set_options", 00:16:55.907 "params": { 00:16:55.907 "bdev_io_pool_size": 65535, 00:16:55.907 "bdev_io_cache_size": 256, 00:16:55.907 "bdev_auto_examine": true, 00:16:55.907 "iobuf_small_cache_size": 128, 00:16:55.907 "iobuf_large_cache_size": 16 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "bdev_raid_set_options", 00:16:55.907 "params": { 00:16:55.907 "process_window_size_kb": 1024, 00:16:55.907 "process_max_bandwidth_mb_sec": 0 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "bdev_iscsi_set_options", 00:16:55.907 "params": { 00:16:55.907 "timeout_sec": 30 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "bdev_nvme_set_options", 00:16:55.907 "params": { 00:16:55.907 "action_on_timeout": "none", 00:16:55.907 "timeout_us": 0, 00:16:55.907 "timeout_admin_us": 0, 00:16:55.907 "keep_alive_timeout_ms": 10000, 00:16:55.907 "arbitration_burst": 0, 00:16:55.907 "low_priority_weight": 0, 00:16:55.907 "medium_priority_weight": 0, 00:16:55.907 "high_priority_weight": 0, 00:16:55.907 "nvme_adminq_poll_period_us": 10000, 00:16:55.907 "nvme_ioq_poll_period_us": 0, 00:16:55.907 "io_queue_requests": 0, 00:16:55.907 "delay_cmd_submit": true, 00:16:55.907 "transport_retry_count": 4, 00:16:55.907 "bdev_retry_count": 3, 00:16:55.907 "transport_ack_timeout": 0, 00:16:55.907 "ctrlr_loss_timeout_sec": 0, 00:16:55.907 "reconnect_delay_sec": 0, 00:16:55.907 "fast_io_fail_timeout_sec": 0, 00:16:55.907 "disable_auto_failback": false, 00:16:55.907 "generate_uuids": false, 00:16:55.907 "transport_tos": 0, 00:16:55.907 "nvme_error_stat": false, 00:16:55.907 "rdma_srq_size": 0, 00:16:55.907 "io_path_stat": false, 00:16:55.907 "allow_accel_sequence": false, 00:16:55.907 "rdma_max_cq_size": 0, 00:16:55.907 "rdma_cm_event_timeout_ms": 0, 00:16:55.907 "dhchap_digests": [ 00:16:55.907 "sha256", 00:16:55.907 "sha384", 00:16:55.907 "sha512" 00:16:55.907 ], 00:16:55.907 "dhchap_dhgroups": [ 00:16:55.907 "null", 00:16:55.907 "ffdhe2048", 00:16:55.907 "ffdhe3072", 00:16:55.907 "ffdhe4096", 00:16:55.907 "ffdhe6144", 00:16:55.907 "ffdhe8192" 00:16:55.907 ] 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "bdev_nvme_set_hotplug", 00:16:55.907 "params": { 00:16:55.907 "period_us": 100000, 00:16:55.907 "enable": false 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "bdev_malloc_create", 00:16:55.907 "params": { 00:16:55.907 "name": "malloc0", 00:16:55.907 "num_blocks": 8192, 00:16:55.907 "block_size": 4096, 00:16:55.907 "physical_block_size": 4096, 00:16:55.907 "uuid": "3045b417-e907-4e1d-87af-0139ff2de575", 00:16:55.907 "optimal_io_boundary": 0, 00:16:55.907 "md_size": 0, 00:16:55.907 "dif_type": 0, 00:16:55.907 "dif_is_head_of_md": false, 00:16:55.907 "dif_pi_format": 0 00:16:55.907 } 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "method": "bdev_wait_for_examine" 00:16:55.907 } 00:16:55.907 ] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "scsi", 00:16:55.907 "config": null 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "scheduler", 00:16:55.907 "config": [ 00:16:55.907 { 00:16:55.907 "method": "framework_set_scheduler", 00:16:55.907 "params": { 00:16:55.907 "name": "static" 00:16:55.907 } 
00:16:55.907 } 00:16:55.907 ] 00:16:55.907 }, 00:16:55.907 { 00:16:55.907 "subsystem": "vhost_scsi", 00:16:55.908 "config": [] 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "subsystem": "vhost_blk", 00:16:55.908 "config": [] 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "subsystem": "ublk", 00:16:55.908 "config": [ 00:16:55.908 { 00:16:55.908 "method": "ublk_create_target", 00:16:55.908 "params": { 00:16:55.908 "cpumask": "1" 00:16:55.908 } 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "method": "ublk_start_disk", 00:16:55.908 "params": { 00:16:55.908 "bdev_name": "malloc0", 00:16:55.908 "ublk_id": 0, 00:16:55.908 "num_queues": 1, 00:16:55.908 "queue_depth": 128 00:16:55.908 } 00:16:55.908 } 00:16:55.908 ] 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "subsystem": "nbd", 00:16:55.908 "config": [] 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "subsystem": "nvmf", 00:16:55.908 "config": [ 00:16:55.908 { 00:16:55.908 "method": "nvmf_set_config", 00:16:55.908 "params": { 00:16:55.908 "discovery_filter": "match_any", 00:16:55.908 "admin_cmd_passthru": { 00:16:55.908 "identify_ctrlr": false 00:16:55.908 }, 00:16:55.908 "dhchap_digests": [ 00:16:55.908 "sha256", 00:16:55.908 "sha384", 00:16:55.908 "sha512" 00:16:55.908 ], 00:16:55.908 "dhchap_dhgroups": [ 00:16:55.908 "null", 00:16:55.908 "ffdhe2048", 00:16:55.908 "ffdhe3072", 00:16:55.908 "ffdhe4096", 00:16:55.908 "ffdhe6144", 00:16:55.908 "ffdhe8192" 00:16:55.908 ] 00:16:55.908 } 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "method": "nvmf_set_max_subsystems", 00:16:55.908 "params": { 00:16:55.908 "max_subsystems": 1024 00:16:55.908 } 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "method": "nvmf_set_crdt", 00:16:55.908 "params": { 00:16:55.908 "crdt1": 0, 00:16:55.908 "crdt2": 0, 00:16:55.908 "crdt3": 0 00:16:55.908 } 00:16:55.908 } 00:16:55.908 ] 00:16:55.908 }, 00:16:55.908 { 00:16:55.908 "subsystem": "iscsi", 00:16:55.908 "config": [ 00:16:55.908 { 00:16:55.908 "method": "iscsi_set_options", 00:16:55.908 "params": { 00:16:55.908 "node_base": "iqn.2016-06.io.spdk", 00:16:55.908 "max_sessions": 128, 00:16:55.908 "max_connections_per_session": 2, 00:16:55.908 "max_queue_depth": 64, 00:16:55.908 "default_time2wait": 2, 00:16:55.908 "default_time2retain": 20, 00:16:55.908 "first_burst_length": 8192, 00:16:55.908 "immediate_data": true, 00:16:55.908 "allow_duplicated_isid": false, 00:16:55.908 "error_recovery_level": 0, 00:16:55.908 "nop_timeout": 60, 00:16:55.908 "nop_in_interval": 30, 00:16:55.908 "disable_chap": false, 00:16:55.908 "require_chap": false, 00:16:55.908 "mutual_chap": false, 00:16:55.908 "chap_group": 0, 00:16:55.908 "max_large_datain_per_connection": 64, 00:16:55.908 "max_r2t_per_connection": 4, 00:16:55.908 "pdu_pool_size": 36864, 00:16:55.908 "immediate_data_pool_size": 16384, 00:16:55.908 "data_out_pool_size": 2048 00:16:55.908 } 00:16:55.908 } 00:16:55.908 ] 00:16:55.908 } 00:16:55.908 ] 00:16:55.908 }' 00:16:55.908 [2024-11-20 17:47:19.426946] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:16:55.908 [2024-11-20 17:47:19.427065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73755 ] 00:16:56.166 [2024-11-20 17:47:19.587627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.166 [2024-11-20 17:47:19.689037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.101 [2024-11-20 17:47:20.451887] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:57.101 [2024-11-20 17:47:20.452698] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:57.101 [2024-11-20 17:47:20.459998] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:57.101 [2024-11-20 17:47:20.460065] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:57.101 [2024-11-20 17:47:20.460074] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:57.101 [2024-11-20 17:47:20.460081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:57.101 [2024-11-20 17:47:20.468954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:57.101 [2024-11-20 17:47:20.468971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:57.101 [2024-11-20 17:47:20.475898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:57.101 [2024-11-20 17:47:20.475982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:57.101 [2024-11-20 17:47:20.492895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73755 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73755 ']' 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73755 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73755 00:16:57.101 killing process with pid 73755 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.101 
17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.101 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73755' 00:16:57.102 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73755 00:16:57.102 17:47:20 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73755 00:16:58.477 [2024-11-20 17:47:21.744799] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:58.477 [2024-11-20 17:47:21.782978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:58.477 [2024-11-20 17:47:21.783099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:58.477 [2024-11-20 17:47:21.790907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:58.477 [2024-11-20 17:47:21.790956] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:58.477 [2024-11-20 17:47:21.790964] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:58.477 [2024-11-20 17:47:21.790989] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:58.477 [2024-11-20 17:47:21.791125] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:59.852 17:47:23 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:59.852 00:16:59.852 real 0m7.651s 00:16:59.852 user 0m5.462s 00:16:59.852 sys 0m2.802s 00:16:59.852 17:47:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.852 ************************************ 00:16:59.852 END TEST test_save_ublk_config 00:16:59.852 ************************************ 00:16:59.852 17:47:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:59.852 17:47:23 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73833 00:16:59.852 17:47:23 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.852 17:47:23 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73833 00:16:59.852 17:47:23 ublk -- common/autotest_common.sh@835 -- # '[' -z 73833 ']' 00:16:59.852 17:47:23 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.852 17:47:23 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.852 17:47:23 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.852 17:47:23 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.852 17:47:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.852 17:47:23 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:59.852 [2024-11-20 17:47:23.298676] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
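The launch-plus-waitforlisten pattern above amounts to starting the target in the background and polling its RPC socket until it answers. A simplified stand-in for the harness helper, not the exact autotest_common.sh implementation (the polling loop is an assumption; flags are the ones shown in the log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # Poll until the RPC server answers on the default socket /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done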
00:16:59.852 [2024-11-20 17:47:23.298797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73833 ] 00:17:00.110 [2024-11-20 17:47:23.453052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:00.110 [2024-11-20 17:47:23.550581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.110 [2024-11-20 17:47:23.550673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.677 17:47:24 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.677 17:47:24 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:00.677 17:47:24 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:00.677 17:47:24 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.677 17:47:24 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.677 17:47:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.677 ************************************ 00:17:00.677 START TEST test_create_ublk 00:17:00.677 ************************************ 00:17:00.677 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:00.677 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:00.677 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.677 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.677 [2024-11-20 17:47:24.165891] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:00.677 [2024-11-20 17:47:24.167767] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:00.677 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.677 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:00.677 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:00.677 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.677 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.936 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:00.936 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.936 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.936 [2024-11-20 17:47:24.367030] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:00.936 [2024-11-20 17:47:24.367402] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:00.936 [2024-11-20 17:47:24.367419] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:00.936 [2024-11-20 17:47:24.367426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:00.936 [2024-11-20 17:47:24.376092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:00.936 [2024-11-20 17:47:24.376112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:00.936 
[2024-11-20 17:47:24.382907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:00.936 [2024-11-20 17:47:24.383510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:00.936 [2024-11-20 17:47:24.398919] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:00.936 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:00.936 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.936 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.936 17:47:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:00.936 { 00:17:00.936 "ublk_device": "/dev/ublkb0", 00:17:00.936 "id": 0, 00:17:00.936 "queue_depth": 512, 00:17:00.936 "num_queues": 4, 00:17:00.936 "bdev_name": "Malloc0" 00:17:00.936 } 00:17:00.936 ]' 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:00.936 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:01.196 17:47:24 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:01.196 17:47:24 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
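The rpc_cmd wrappers above correspond to plain rpc.py invocations. A condensed sketch of the bring-up being exercised, with names, sizes, and queue settings taken from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                       # start the ublk target in the SPDK app
    $rpc bdev_malloc_create 128 4096 -b Malloc0   # 128 MiB RAM-backed bdev, 4 KiB blocks
    $rpc ublk_start_disk Malloc0 0 -q 4 -d 512    # exposes /dev/ublkb0
    $rpc ublk_get_disks -n 0                      # returns the JSON checked above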
17:47:24 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
fio: verification read phase will never start because write phase uses all of runtime
fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.35
Starting 1 process

fio_test: (groupid=0, jobs=1): err= 0: pid=73873: Wed Nov 20 17:47:34 2024
  write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(621MiB/10001msec); 0 zone resets
    clat (usec): min=34, max=3787, avg=61.99, stdev=95.29
     lat (usec): min=34, max=3787, avg=62.53, stdev=95.45
    clat percentiles (usec):
     |  1.00th=[   40],  5.00th=[   42], 10.00th=[   43], 20.00th=[   45],
     | 30.00th=[   47], 40.00th=[   50], 50.00th=[   52], 60.00th=[   56],
     | 70.00th=[   60], 80.00th=[   67], 90.00th=[   75], 95.00th=[   81],
     | 99.00th=[  289], 99.50th=[  326], 99.90th=[ 1795], 99.95th=[ 2704],
     | 99.99th=[ 3589]
   bw (  KiB/s): min=41920, max=82208, per=99.27%, avg=63089.68, stdev=14557.45, samples=19
   iops        : min=10480, max=20552, avg=15772.42, stdev=3639.36, samples=19
  lat (usec)   : 50=42.80%, 100=54.97%, 250=0.76%, 500=1.31%, 750=0.02%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.05%, 4=0.09%
  cpu          : usr=3.17%, sys=13.99%, ctx=158884, majf=0, minf=796
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,158895,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=621MiB (651MB), run=10001-10001msec

Disk stats (read/write):
  ublkb0: ios=0/156912, merge=0/0, ticks=0/7766, in_queue=7767, util=99.09%
17:47:34 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 17:47:34.812685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
[2024-11-20 17:47:34.839364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
[2024-11-20 17:47:34.840266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
[2024-11-20 17:47:34.845892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
[2024-11-20 17:47:34.846128] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
[2024-11-20 17:47:34.846142] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
17:47:34 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 [2024-11-20 17:47:34.861945] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:13.424 request: 00:17:13.424 { 00:17:13.424 "ublk_id": 0, 00:17:13.424 "method": "ublk_stop_disk", 00:17:13.424 "req_id": 1 00:17:13.424 } 00:17:13.424 Got JSON-RPC error response 00:17:13.424 response: 00:17:13.424 { 00:17:13.424 "code": -19, 00:17:13.424 "message": "No such device" 00:17:13.424 } 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.424 17:47:34 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 [2024-11-20 17:47:34.877946] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:13.424 [2024-11-20 17:47:34.881507] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:13.424 [2024-11-20 17:47:34.881539] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.424 17:47:34 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.424 17:47:35 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:13.424 17:47:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.424 17:47:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:13.424 17:47:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:13.424 17:47:35 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:13.424 17:47:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.424 17:47:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:13.424 17:47:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:13.424 17:47:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:13.424 00:17:13.424 real 0m11.187s 00:17:13.424 user 0m0.619s 00:17:13.424 sys 0m1.481s 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.424 ************************************ 00:17:13.424 17:47:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 END TEST test_create_ublk 00:17:13.424 ************************************ 00:17:13.424 17:47:35 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:13.424 17:47:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:13.424 17:47:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.424 17:47:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 ************************************ 00:17:13.424 START TEST test_create_multi_ublk 00:17:13.424 ************************************ 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 [2024-11-20 17:47:35.385884] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:13.424 [2024-11-20 17:47:35.387445] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.424 [2024-11-20 17:47:35.601992] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:17:13.424 [2024-11-20 17:47:35.602294] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:13.424 [2024-11-20 17:47:35.602306] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:13.424 [2024-11-20 17:47:35.602314] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:13.424 [2024-11-20 17:47:35.621892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:13.424 [2024-11-20 17:47:35.621912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:13.424 [2024-11-20 17:47:35.633895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:13.424 [2024-11-20 17:47:35.634400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:13.424 [2024-11-20 17:47:35.641929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.424 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.425 [2024-11-20 17:47:35.867983] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:13.425 [2024-11-20 17:47:35.868280] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:13.425 [2024-11-20 17:47:35.868294] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:13.425 [2024-11-20 17:47:35.868300] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:13.425 [2024-11-20 17:47:35.875904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:13.425 [2024-11-20 17:47:35.875928] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:13.425 [2024-11-20 17:47:35.883902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:13.425 [2024-11-20 17:47:35.884405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:13.425 [2024-11-20 17:47:35.892925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.425 17:47:35 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.425 17:47:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.425 [2024-11-20 17:47:36.059978] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:13.425 [2024-11-20 17:47:36.060284] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:13.425 [2024-11-20 17:47:36.060296] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:13.425 [2024-11-20 17:47:36.060302] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:13.425 [2024-11-20 17:47:36.067908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:13.425 [2024-11-20 17:47:36.067929] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:13.425 [2024-11-20 17:47:36.075890] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:13.425 [2024-11-20 17:47:36.076396] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:13.425 [2024-11-20 17:47:36.079740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.425 [2024-11-20 17:47:36.247995] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:13.425 [2024-11-20 17:47:36.248295] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:13.425 [2024-11-20 17:47:36.248308] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:13.425 [2024-11-20 17:47:36.248313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:13.425 [2024-11-20 
17:47:36.257063] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:13.425 [2024-11-20 17:47:36.257079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:13.425 [2024-11-20 17:47:36.263890] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:13.425 [2024-11-20 17:47:36.264391] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:13.425 [2024-11-20 17:47:36.267714] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:13.425 { 00:17:13.425 "ublk_device": "/dev/ublkb0", 00:17:13.425 "id": 0, 00:17:13.425 "queue_depth": 512, 00:17:13.425 "num_queues": 4, 00:17:13.425 "bdev_name": "Malloc0" 00:17:13.425 }, 00:17:13.425 { 00:17:13.425 "ublk_device": "/dev/ublkb1", 00:17:13.425 "id": 1, 00:17:13.425 "queue_depth": 512, 00:17:13.425 "num_queues": 4, 00:17:13.425 "bdev_name": "Malloc1" 00:17:13.425 }, 00:17:13.425 { 00:17:13.425 "ublk_device": "/dev/ublkb2", 00:17:13.425 "id": 2, 00:17:13.425 "queue_depth": 512, 00:17:13.425 "num_queues": 4, 00:17:13.425 "bdev_name": "Malloc2" 00:17:13.425 }, 00:17:13.425 { 00:17:13.425 "ublk_device": "/dev/ublkb3", 00:17:13.425 "id": 3, 00:17:13.425 "queue_depth": 512, 00:17:13.425 "num_queues": 4, 00:17:13.425 "bdev_name": "Malloc3" 00:17:13.425 } 00:17:13.425 ]' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
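All four devices listed in the JSON above come from the same three-step recipe applied in a loop. A compact equivalent of what the test's seq loop does, with sizes and queue settings taken from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target
    for i in 0 1 2 3; do
        $rpc bdev_malloc_create -b "Malloc$i" 128 4096    # one backing bdev per disk
        $rpc ublk_start_disk "Malloc$i" "$i" -q 4 -d 512  # /dev/ublkb$i
    done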
00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:13.425 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.426 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.426 [2024-11-20 17:47:36.923972] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.684 [2024-11-20 17:47:36.961889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.684 [2024-11-20 17:47:36.962634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.684 [2024-11-20 17:47:36.971888] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.684 [2024-11-20 17:47:36.972144] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:13.684 [2024-11-20 17:47:36.972159] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:13.684 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.684 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.684 17:47:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:13.684 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.684 17:47:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.684 [2024-11-20 17:47:36.979943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.684 [2024-11-20 17:47:37.010927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.684 [2024-11-20 17:47:37.011586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.684 [2024-11-20 17:47:37.015887] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.684 [2024-11-20 17:47:37.016135] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:13.684 [2024-11-20 17:47:37.016148] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:13.684 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.684 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.684 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:13.684 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.684 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.684 [2024-11-20 17:47:37.023979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.684 [2024-11-20 17:47:37.053342] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.684 [2024-11-20 17:47:37.054299] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.684 [2024-11-20 17:47:37.063901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.685 [2024-11-20 17:47:37.064116] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:13.685 [2024-11-20 17:47:37.064128] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:13.685 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.685 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.685 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:13.685 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.685 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
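Teardown walks the devices in reverse; the per-device UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV pairs in the surrounding debug lines are produced by this sequence, sketched here with the RPC names used by the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2 3; do
        $rpc ublk_stop_disk "$i"        # issues UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    done
    $rpc -t 120 ublk_destroy_target     # final shutdown of the ublk target
    for i in 0 1 2 3; do
        $rpc bdev_malloc_delete "Malloc$i"
    done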
00:17:13.685 [2024-11-20 17:47:37.079953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.685 [2024-11-20 17:47:37.113350] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.685 [2024-11-20 17:47:37.114267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.685 [2024-11-20 17:47:37.116503] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.685 [2024-11-20 17:47:37.116727] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:13.685 [2024-11-20 17:47:37.116739] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:13.685 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.685 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:13.942 [2024-11-20 17:47:37.310948] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:13.942 [2024-11-20 17:47:37.314590] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:13.942 [2024-11-20 17:47:37.314619] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:13.942 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:13.942 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.943 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:13.943 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.943 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.200 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.200 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:14.200 17:47:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:14.200 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.200 17:47:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.765 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.765 17:47:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:14.765 17:47:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:14.766 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.766 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.766 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.766 17:47:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:14.766 17:47:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:14.766 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.766 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:15.023 00:17:15.023 real 0m3.147s 00:17:15.023 user 0m0.791s 00:17:15.023 sys 0m0.151s 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.023 17:47:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.023 ************************************ 00:17:15.023 END TEST test_create_multi_ublk 00:17:15.023 ************************************ 00:17:15.023 17:47:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:15.023 17:47:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:15.023 17:47:38 ublk -- ublk/ublk.sh@130 -- # killprocess 73833 00:17:15.023 17:47:38 ublk -- common/autotest_common.sh@954 -- # '[' -z 73833 ']' 00:17:15.023 17:47:38 ublk -- common/autotest_common.sh@958 -- # kill -0 73833 00:17:15.023 17:47:38 ublk -- common/autotest_common.sh@959 -- # uname 00:17:15.023 17:47:38 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.023 17:47:38 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73833 00:17:15.283 17:47:38 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.283 17:47:38 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.283 killing process with pid 73833 00:17:15.283 17:47:38 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73833' 00:17:15.283 17:47:38 ublk -- common/autotest_common.sh@973 -- # kill 73833 00:17:15.283 17:47:38 ublk -- common/autotest_common.sh@978 -- # wait 73833 00:17:15.850 [2024-11-20 17:47:39.119350] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:15.850 [2024-11-20 17:47:39.119395] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:16.417 00:17:16.417 real 0m24.445s 00:17:16.417 user 0m35.156s 00:17:16.417 sys 0m9.181s 00:17:16.417 17:47:39 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.417 ************************************ 00:17:16.417 17:47:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.417 END TEST ublk 00:17:16.417 ************************************ 00:17:16.417 17:47:39 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:16.417 17:47:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']'
17:47:39 -- common/autotest_common.sh@1111 -- # xtrace_disable
17:47:39 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST ublk_recovery
************************************
17:47:39 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
17:47:39 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
17:47:39 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
17:47:39 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
17:47:39 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
17:47:39 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
17:47:39 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
17:47:39 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
17:47:39 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
17:47:39 ublk_recovery -- lvol/common.sh@14
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:16.676 17:47:39 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:16.676 17:47:40 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74222 00:17:16.676 17:47:40 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.676 17:47:40 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74222 00:17:16.676 17:47:40 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74222 ']' 00:17:16.676 17:47:40 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:16.676 17:47:40 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.676 17:47:40 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.676 17:47:40 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.676 17:47:40 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.676 17:47:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.676 [2024-11-20 17:47:40.078158] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:17:16.676 [2024-11-20 17:47:40.078626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74222 ] 00:17:16.935 [2024-11-20 17:47:40.234118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:16.935 [2024-11-20 17:47:40.314579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.935 [2024-11-20 17:47:40.314693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:17.501 17:47:40 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.501 [2024-11-20 17:47:40.911891] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:17.501 [2024-11-20 17:47:40.913446] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.501 17:47:40 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.501 malloc0 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.501 17:47:40 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.501 17:47:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.501 [2024-11-20 17:47:40.999991] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:17.501 [2024-11-20 17:47:41.000076] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:17.501 [2024-11-20 17:47:41.000085] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:17.501 [2024-11-20 17:47:41.000092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:17.501 [2024-11-20 17:47:41.008962] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:17.501 [2024-11-20 17:47:41.008978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:17.501 [2024-11-20 17:47:41.015891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:17.501 [2024-11-20 17:47:41.016007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:17.501 [2024-11-20 17:47:41.037895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:17.759 1 00:17:17.759 17:47:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.759 17:47:41 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:18.690 17:47:42 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74261 00:17:18.690 17:47:42 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:18.690 17:47:42 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:18.690 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.690 fio-3.35 00:17:18.690 Starting 1 process 00:17:23.954 17:47:47 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74222 00:17:23.955 17:47:47 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:29.217 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74222 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:29.217 17:47:52 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74367 00:17:29.217 17:47:52 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:29.217 17:47:52 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.217 17:47:52 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74367 00:17:29.217 17:47:52 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74367 ']' 00:17:29.217 17:47:52 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.217 17:47:52 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.217 17:47:52 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.217 17:47:52 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.217 17:47:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.217 [2024-11-20 17:47:52.137021] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
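The trace above is the setup half of SPDK's ublk user-recovery scenario: a ublk target is created inside spdk_tgt (pid 74222), a 64 MiB malloc bdev with 4 KiB blocks is exported as /dev/ublkb1 (2 queues, queue depth 128), fio is started against the device node, and the serving spdk_tgt is then killed with SIGKILL so that the second spdk_tgt started above (pid 74367) can take the device over; the recovery RPC itself appears in the trace that follows. A minimal sketch of the same sequence, assuming it is run from the top of an SPDK checkout (the paths and the $pid bookkeeping are illustrative; the RPC names and arguments are the ones traced here):

modprobe ublk_drv
./build/bin/spdk_tgt -m 0x3 -L ublk & pid=$!
./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096    # 64 MiB bdev, 4 KiB blocks
./scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # exports /dev/ublkb1
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
kill -9 $pid                                              # simulate a target crash mid-I/O
./build/bin/spdk_tgt -m 0x3 -L ublk & pid=$!
./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
./scripts/rpc.py ublk_recover_disk malloc0 1              # re-adopt the still-open /dev/ublkb1

The point of the test is that fio keeps the device node open across the crash; recovery succeeds only if the new target can re-adopt the kernel-side queues, which is what the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY control commands in the trace below negotiate.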
00:17:29.217 [2024-11-20 17:47:52.137143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74367 ] 00:17:29.217 [2024-11-20 17:47:52.293628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.217 [2024-11-20 17:47:52.377475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.217 [2024-11-20 17:47:52.377561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.475 17:47:52 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.475 17:47:52 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:29.475 17:47:52 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:29.475 17:47:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.475 17:47:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.475 [2024-11-20 17:47:52.979892] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:29.475 [2024-11-20 17:47:52.981518] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:29.475 17:47:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.475 17:47:52 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:29.475 17:47:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.475 17:47:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.733 malloc0 00:17:29.733 17:47:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.733 17:47:53 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:29.733 17:47:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.733 17:47:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.733 [2024-11-20 17:47:53.068000] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:29.733 [2024-11-20 17:47:53.068031] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:29.733 [2024-11-20 17:47:53.068039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:29.733 [2024-11-20 17:47:53.075918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:29.733 [2024-11-20 17:47:53.075938] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:29.733 [2024-11-20 17:47:53.075946] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:29.733 [2024-11-20 17:47:53.076009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:29.733 1 00:17:29.733 17:47:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.733 17:47:53 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74261 00:17:29.733 [2024-11-20 17:47:53.083895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:29.733 [2024-11-20 17:47:53.090208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:29.733 [2024-11-20 17:47:53.098031] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:29.733 [2024-11-20 
17:47:53.098051] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:25.978 00:18:25.978 fio_test: (groupid=0, jobs=1): err= 0: pid=74265: Wed Nov 20 17:48:42 2024 00:18:25.978 read: IOPS=27.7k, BW=108MiB/s (114MB/s)(6496MiB/60002msec) 00:18:25.978 slat (nsec): min=923, max=227582, avg=4889.13, stdev=1508.11 00:18:25.978 clat (usec): min=564, max=6053.3k, avg=2260.49, stdev=36945.59 00:18:25.978 lat (usec): min=569, max=6053.3k, avg=2265.38, stdev=36945.59 00:18:25.978 clat percentiles (usec): 00:18:25.978 | 1.00th=[ 1663], 5.00th=[ 1811], 10.00th=[ 1844], 20.00th=[ 1876], 00:18:25.978 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1942], 00:18:25.978 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2073], 95.00th=[ 2868], 00:18:25.978 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 6456], 99.95th=[ 7177], 00:18:25.978 | 99.99th=[12780] 00:18:25.978 bw ( KiB/s): min=17992, max=129320, per=100.00%, avg=122129.56, stdev=13393.87, samples=108 00:18:25.978 iops : min= 4498, max=32330, avg=30532.39, stdev=3348.47, samples=108 00:18:25.978 write: IOPS=27.7k, BW=108MiB/s (113MB/s)(6491MiB/60002msec); 0 zone resets 00:18:25.978 slat (nsec): min=999, max=420284, avg=4927.34, stdev=1561.49 00:18:25.978 clat (usec): min=595, max=6053.4k, avg=2348.72, stdev=38132.79 00:18:25.978 lat (usec): min=599, max=6053.4k, avg=2353.65, stdev=38132.79 00:18:25.978 clat percentiles (usec): 00:18:25.978 | 1.00th=[ 1696], 5.00th=[ 1876], 10.00th=[ 1926], 20.00th=[ 1958], 00:18:25.978 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2024], 00:18:25.978 | 70.00th=[ 2040], 80.00th=[ 2073], 90.00th=[ 2147], 95.00th=[ 2835], 00:18:25.978 | 99.00th=[ 4686], 99.50th=[ 5145], 99.90th=[ 6456], 99.95th=[ 7242], 00:18:25.978 | 99.99th=[12911] 00:18:25.978 bw ( KiB/s): min=17760, max=129032, per=100.00%, avg=122017.11, stdev=13456.15, samples=108 00:18:25.978 iops : min= 4440, max=32258, avg=30504.28, stdev=3364.04, samples=108 00:18:25.978 lat (usec) : 750=0.01%, 1000=0.01% 00:18:25.978 lat (msec) : 2=64.72%, 4=32.98%, 10=2.28%, 20=0.01%, >=2000=0.01% 00:18:25.978 cpu : usr=6.24%, sys=28.15%, ctx=115470, majf=0, minf=14 00:18:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.978 issued rwts: total=1663006,1661745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.978 00:18:25.978 Run status group 0 (all jobs): 00:18:25.978 READ: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=6496MiB (6812MB), run=60002-60002msec 00:18:25.978 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=6491MiB (6807MB), run=60002-60002msec 00:18:25.978 00:18:25.978 Disk stats (read/write): 00:18:25.978 ublkb1: ios=1659780/1658450, merge=0/0, ticks=3660572/3673962, in_queue=7334534, util=99.90% 00:18:25.978 17:48:42 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.978 [2024-11-20 17:48:42.295688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:25.978 [2024-11-20 17:48:42.329982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:18:25.978 [2024-11-20 17:48:42.330109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:25.978 [2024-11-20 17:48:42.338915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:25.978 [2024-11-20 17:48:42.339005] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:25.978 [2024-11-20 17:48:42.339014] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.978 17:48:42 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.978 [2024-11-20 17:48:42.347966] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:25.978 [2024-11-20 17:48:42.353884] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:25.978 [2024-11-20 17:48:42.353913] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.978 17:48:42 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:25.978 17:48:42 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:25.978 17:48:42 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74367 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74367 ']' 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74367 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74367 00:18:25.978 killing process with pid 74367 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74367' 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74367 00:18:25.978 17:48:42 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74367 00:18:25.978 [2024-11-20 17:48:43.499009] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:25.978 [2024-11-20 17:48:43.499052] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:25.978 ************************************ 00:18:25.978 END TEST ublk_recovery 00:18:25.978 ************************************ 00:18:25.978 00:18:25.978 real 1m4.376s 00:18:25.978 user 1m43.301s 00:18:25.978 sys 0m35.386s 00:18:25.978 17:48:44 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.978 17:48:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.978 17:48:44 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:25.978 17:48:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:25.978 17:48:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.978 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.978 17:48:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:25.978 17:48:44 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:25.978 17:48:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.978 17:48:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.978 17:48:44 -- common/autotest_common.sh@10 -- # set +x 00:18:25.978 ************************************ 00:18:25.978 START TEST ftl 00:18:25.978 ************************************ 00:18:25.978 17:48:44 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:25.978 * Looking for test storage... 00:18:25.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.978 17:48:44 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.978 17:48:44 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.978 17:48:44 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.978 17:48:44 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.978 17:48:44 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.978 17:48:44 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.978 17:48:44 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.978 17:48:44 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.978 17:48:44 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.979 17:48:44 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.979 17:48:44 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.979 17:48:44 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.979 17:48:44 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.979 17:48:44 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.979 17:48:44 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.979 17:48:44 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:25.979 17:48:44 ftl -- scripts/common.sh@345 -- # : 1 00:18:25.979 17:48:44 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.979 17:48:44 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.979 17:48:44 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:25.979 17:48:44 ftl -- scripts/common.sh@353 -- # local d=1 00:18:25.979 17:48:44 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.979 17:48:44 ftl -- scripts/common.sh@355 -- # echo 1 00:18:25.979 17:48:44 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.979 17:48:44 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:25.979 17:48:44 ftl -- scripts/common.sh@353 -- # local d=2 00:18:25.979 17:48:44 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.979 17:48:44 ftl -- scripts/common.sh@355 -- # echo 2 00:18:25.979 17:48:44 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.979 17:48:44 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.979 17:48:44 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.979 17:48:44 ftl -- scripts/common.sh@368 -- # return 0 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.979 --rc genhtml_branch_coverage=1 00:18:25.979 --rc genhtml_function_coverage=1 00:18:25.979 --rc genhtml_legend=1 00:18:25.979 --rc geninfo_all_blocks=1 00:18:25.979 --rc geninfo_unexecuted_blocks=1 00:18:25.979 00:18:25.979 ' 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.979 --rc genhtml_branch_coverage=1 00:18:25.979 --rc genhtml_function_coverage=1 00:18:25.979 --rc genhtml_legend=1 00:18:25.979 --rc geninfo_all_blocks=1 00:18:25.979 --rc geninfo_unexecuted_blocks=1 00:18:25.979 00:18:25.979 ' 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.979 --rc genhtml_branch_coverage=1 00:18:25.979 --rc genhtml_function_coverage=1 00:18:25.979 --rc genhtml_legend=1 00:18:25.979 --rc geninfo_all_blocks=1 00:18:25.979 --rc geninfo_unexecuted_blocks=1 00:18:25.979 00:18:25.979 ' 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.979 --rc genhtml_branch_coverage=1 00:18:25.979 --rc genhtml_function_coverage=1 00:18:25.979 --rc genhtml_legend=1 00:18:25.979 --rc geninfo_all_blocks=1 00:18:25.979 --rc geninfo_unexecuted_blocks=1 00:18:25.979 00:18:25.979 ' 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:25.979 17:48:44 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:25.979 17:48:44 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.979 17:48:44 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.979 17:48:44 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
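The xtrace block above is not part of the FTL test proper: ftl.sh first asks scripts/common.sh whether the installed lcov (1.15 here, awk-ed out of lcov --version) predates 2.0, so it can keep the old --rc flag spelling for coverage runs. The traced cmp_versions splits both version strings on ".", "-" and ":" and compares the components numerically from the left. A condensed stand-alone sketch of just the "less than" case used here (the generic operator dispatch and the digit sanitization visible in the trace are omitted):

lt() { # true if version $1 sorts before version $2, e.g. lt 1.15 2
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing component decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1 # all components equal
}
lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 etc."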
00:18:25.979 17:48:44 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.979 17:48:44 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.979 17:48:44 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:25.979 17:48:44 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:25.979 17:48:44 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.979 17:48:44 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.979 17:48:44 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:25.979 17:48:44 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:25.979 17:48:44 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.979 17:48:44 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.979 17:48:44 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:25.979 17:48:44 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:25.979 17:48:44 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.979 17:48:44 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.979 17:48:44 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:25.979 17:48:44 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:25.979 17:48:44 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.979 17:48:44 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.979 17:48:44 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.979 17:48:44 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.979 17:48:44 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:25.979 17:48:44 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:25.979 17:48:44 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.979 17:48:44 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:25.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:25.979 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.979 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.979 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.979 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75167 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:25.979 17:48:44 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75167 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@835 -- # '[' -z 75167 ']' 00:18:25.979 17:48:44 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.979 17:48:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:25.979 [2024-11-20 17:48:45.033461] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:18:25.979 [2024-11-20 17:48:45.033765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75167 ] 00:18:25.979 [2024-11-20 17:48:45.190367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.979 [2024-11-20 17:48:45.282957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.979 17:48:45 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.979 17:48:45 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:25.979 17:48:45 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:25.979 17:48:46 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:25.979 17:48:46 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:25.979 17:48:46 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@50 -- # break 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:25.979 17:48:47 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:25.980 17:48:47 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:25.980 17:48:47 ftl -- ftl/ftl.sh@63 -- # break 00:18:25.980 17:48:47 ftl -- ftl/ftl.sh@66 -- # killprocess 75167 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@954 -- # '[' -z 75167 ']' 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@958 -- # kill -0 75167 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@959 -- # uname 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.980 17:48:47 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75167 00:18:25.980 killing process with pid 75167 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75167' 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@973 -- # kill 75167 00:18:25.980 17:48:47 ftl -- common/autotest_common.sh@978 -- # wait 75167 00:18:25.980 17:48:49 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:25.980 17:48:49 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:25.980 17:48:49 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:25.980 17:48:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.980 17:48:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:25.980 ************************************ 00:18:25.980 START TEST ftl_fio_basic 00:18:25.980 ************************************ 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:25.980 * Looking for test storage... 00:18:25.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.980 --rc genhtml_branch_coverage=1 00:18:25.980 --rc genhtml_function_coverage=1 00:18:25.980 --rc genhtml_legend=1 00:18:25.980 --rc geninfo_all_blocks=1 00:18:25.980 --rc geninfo_unexecuted_blocks=1 00:18:25.980 00:18:25.980 ' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.980 --rc genhtml_branch_coverage=1 00:18:25.980 --rc genhtml_function_coverage=1 00:18:25.980 --rc genhtml_legend=1 00:18:25.980 --rc geninfo_all_blocks=1 00:18:25.980 --rc geninfo_unexecuted_blocks=1 00:18:25.980 00:18:25.980 ' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.980 --rc genhtml_branch_coverage=1 00:18:25.980 --rc genhtml_function_coverage=1 00:18:25.980 --rc genhtml_legend=1 00:18:25.980 --rc geninfo_all_blocks=1 00:18:25.980 --rc geninfo_unexecuted_blocks=1 00:18:25.980 00:18:25.980 ' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.980 --rc genhtml_branch_coverage=1 00:18:25.980 --rc genhtml_function_coverage=1 00:18:25.980 --rc genhtml_legend=1 00:18:25.980 --rc geninfo_all_blocks=1 00:18:25.980 --rc geninfo_unexecuted_blocks=1 00:18:25.980 00:18:25.980 ' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
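The same lcov probe repeats here under the ftl.ftl_fio_basic prefix, and ftl/common.sh is sourced a second time. Its first job, traced immediately above and continued below, is to anchor every other path off the sourcing script's own location; a sketch of the effect (variable names as exported in the trace, the dirname argument being fio.sh exactly as traced; how common.sh obtains that path is not visible here):

testdir=$(readlink -f "$(dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh)")  # .../spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")                                           # the spdk checkout
rpc_py=$rootdir/scripts/rpc.py
spdk_tgt_bin=$rootdir/build/bin/spdk_tgt   # the cpumasks and tgt.json/ini.json paths follow the same pattern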
00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:25.980 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75305 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75305 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75305 ']' 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.981 17:48:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.981 [2024-11-20 17:48:49.442804] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
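With spdk_tgt now up on three cores (-m 7), fio.sh assembles the ftl0 device bottom-up, and the traces below show every step: attach the base NVMe at 0000:00:11.0, put an lvstore on it, carve out a thin-provisioned 103424 MiB lvol, attach the cache NVMe at 0000:00:10.0, split a 5171 MiB slice off it for the NV cache, then bind base and cache together. Condensed to the RPC sequence (rpc stands in for the full scripts/rpc.py path; the two UUID arguments are per-run values printed in the trace):

rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
rpc bdev_lvol_create_lvstore nvme0n1 lvs
rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>             # thin lvol holding FTL data
rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device
rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB NV-cache slice
rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> -c nvc0n1p0 --l2p_dram_limit 60

The 5120/5171/103424 MiB figures are not picked by hand: each comes from the get_bdev_size helper in autotest_common.sh, whose bdev_get_bdevs JSON dumps fill most of the traces below. It multiplies block_size by num_blocks and reports MiB (4096 B x 1310720 blocks = 5120 MiB for the raw namespace, 4096 B x 26476544 blocks = 103424 MiB for the lvol). A sketch of that helper, assuming jq and the rpc shorthand above:

get_bdev_size() { # prints the size of bdev $1 in MiB
    local info bs nb
    info=$(rpc bdev_get_bdevs -b "$1")
    bs=$(jq -r '.[] .block_size' <<< "$info")
    nb=$(jq -r '.[] .num_blocks' <<< "$info")
    echo $(( bs * nb / 1024 / 1024 ))
}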
00:18:25.981 [2024-11-20 17:48:49.442931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75305 ] 00:18:26.242 [2024-11-20 17:48:49.597956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:26.242 [2024-11-20 17:48:49.700983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.242 [2024-11-20 17:48:49.701442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.242 [2024-11-20 17:48:49.701536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:26.817 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:27.392 { 00:18:27.392 "name": "nvme0n1", 00:18:27.392 "aliases": [ 00:18:27.392 "e96ea047-a0b3-4934-96d3-ee8af8b11257" 00:18:27.392 ], 00:18:27.392 "product_name": "NVMe disk", 00:18:27.392 "block_size": 4096, 00:18:27.392 "num_blocks": 1310720, 00:18:27.392 "uuid": "e96ea047-a0b3-4934-96d3-ee8af8b11257", 00:18:27.392 "numa_id": -1, 00:18:27.392 "assigned_rate_limits": { 00:18:27.392 "rw_ios_per_sec": 0, 00:18:27.392 "rw_mbytes_per_sec": 0, 00:18:27.392 "r_mbytes_per_sec": 0, 00:18:27.392 "w_mbytes_per_sec": 0 00:18:27.392 }, 00:18:27.392 "claimed": false, 00:18:27.392 "zoned": false, 00:18:27.392 "supported_io_types": { 00:18:27.392 "read": true, 00:18:27.392 "write": true, 00:18:27.392 "unmap": true, 00:18:27.392 "flush": true, 00:18:27.392 "reset": true, 00:18:27.392 "nvme_admin": true, 00:18:27.392 "nvme_io": true, 00:18:27.392 "nvme_io_md": false, 00:18:27.392 "write_zeroes": true, 00:18:27.392 "zcopy": false, 00:18:27.392 "get_zone_info": false, 00:18:27.392 "zone_management": false, 00:18:27.392 "zone_append": false, 00:18:27.392 "compare": true, 00:18:27.392 "compare_and_write": false, 00:18:27.392 "abort": true, 00:18:27.392 
"seek_hole": false, 00:18:27.392 "seek_data": false, 00:18:27.392 "copy": true, 00:18:27.392 "nvme_iov_md": false 00:18:27.392 }, 00:18:27.392 "driver_specific": { 00:18:27.392 "nvme": [ 00:18:27.392 { 00:18:27.392 "pci_address": "0000:00:11.0", 00:18:27.392 "trid": { 00:18:27.392 "trtype": "PCIe", 00:18:27.392 "traddr": "0000:00:11.0" 00:18:27.392 }, 00:18:27.392 "ctrlr_data": { 00:18:27.392 "cntlid": 0, 00:18:27.392 "vendor_id": "0x1b36", 00:18:27.392 "model_number": "QEMU NVMe Ctrl", 00:18:27.392 "serial_number": "12341", 00:18:27.392 "firmware_revision": "8.0.0", 00:18:27.392 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:27.392 "oacs": { 00:18:27.392 "security": 0, 00:18:27.392 "format": 1, 00:18:27.392 "firmware": 0, 00:18:27.392 "ns_manage": 1 00:18:27.392 }, 00:18:27.392 "multi_ctrlr": false, 00:18:27.392 "ana_reporting": false 00:18:27.392 }, 00:18:27.392 "vs": { 00:18:27.392 "nvme_version": "1.4" 00:18:27.392 }, 00:18:27.392 "ns_data": { 00:18:27.392 "id": 1, 00:18:27.392 "can_share": false 00:18:27.392 } 00:18:27.392 } 00:18:27.392 ], 00:18:27.392 "mp_policy": "active_passive" 00:18:27.392 } 00:18:27.392 } 00:18:27.392 ]' 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:27.392 17:48:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:27.655 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:27.655 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:27.917 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3d345486-9a0d-4075-a76c-c9ab24bd6228 00:18:27.917 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3d345486-9a0d-4075-a76c-c9ab24bd6228 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1199acd5-a765-402a-bb04-f8aebf1e15f3 
00:18:28.177 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:28.177 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:28.488 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.489 { 00:18:28.489 "name": "1199acd5-a765-402a-bb04-f8aebf1e15f3", 00:18:28.489 "aliases": [ 00:18:28.489 "lvs/nvme0n1p0" 00:18:28.489 ], 00:18:28.489 "product_name": "Logical Volume", 00:18:28.489 "block_size": 4096, 00:18:28.489 "num_blocks": 26476544, 00:18:28.489 "uuid": "1199acd5-a765-402a-bb04-f8aebf1e15f3", 00:18:28.489 "assigned_rate_limits": { 00:18:28.489 "rw_ios_per_sec": 0, 00:18:28.489 "rw_mbytes_per_sec": 0, 00:18:28.489 "r_mbytes_per_sec": 0, 00:18:28.489 "w_mbytes_per_sec": 0 00:18:28.489 }, 00:18:28.489 "claimed": false, 00:18:28.489 "zoned": false, 00:18:28.489 "supported_io_types": { 00:18:28.489 "read": true, 00:18:28.489 "write": true, 00:18:28.489 "unmap": true, 00:18:28.489 "flush": false, 00:18:28.489 "reset": true, 00:18:28.489 "nvme_admin": false, 00:18:28.489 "nvme_io": false, 00:18:28.489 "nvme_io_md": false, 00:18:28.489 "write_zeroes": true, 00:18:28.489 "zcopy": false, 00:18:28.489 "get_zone_info": false, 00:18:28.489 "zone_management": false, 00:18:28.489 "zone_append": false, 00:18:28.489 "compare": false, 00:18:28.489 "compare_and_write": false, 00:18:28.489 "abort": false, 00:18:28.489 "seek_hole": true, 00:18:28.489 "seek_data": true, 00:18:28.489 "copy": false, 00:18:28.489 "nvme_iov_md": false 00:18:28.489 }, 00:18:28.489 "driver_specific": { 00:18:28.489 "lvol": { 00:18:28.489 "lvol_store_uuid": "3d345486-9a0d-4075-a76c-c9ab24bd6228", 00:18:28.489 "base_bdev": "nvme0n1", 00:18:28.489 "thin_provision": true, 00:18:28.489 "num_allocated_clusters": 0, 00:18:28.489 "snapshot": false, 00:18:28.489 "clone": false, 00:18:28.489 "esnap_clone": false 00:18:28.489 } 00:18:28.489 } 00:18:28.489 } 00:18:28.489 ]' 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:28.489 17:48:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:28.764 17:48:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:28.764 17:48:52 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:28.764 17:48:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:28.764 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:28.764 17:48:52 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.764 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:28.764 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:28.764 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:29.025 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:29.025 { 00:18:29.025 "name": "1199acd5-a765-402a-bb04-f8aebf1e15f3", 00:18:29.025 "aliases": [ 00:18:29.025 "lvs/nvme0n1p0" 00:18:29.025 ], 00:18:29.025 "product_name": "Logical Volume", 00:18:29.025 "block_size": 4096, 00:18:29.025 "num_blocks": 26476544, 00:18:29.025 "uuid": "1199acd5-a765-402a-bb04-f8aebf1e15f3", 00:18:29.025 "assigned_rate_limits": { 00:18:29.025 "rw_ios_per_sec": 0, 00:18:29.025 "rw_mbytes_per_sec": 0, 00:18:29.025 "r_mbytes_per_sec": 0, 00:18:29.025 "w_mbytes_per_sec": 0 00:18:29.025 }, 00:18:29.025 "claimed": false, 00:18:29.025 "zoned": false, 00:18:29.025 "supported_io_types": { 00:18:29.025 "read": true, 00:18:29.025 "write": true, 00:18:29.025 "unmap": true, 00:18:29.025 "flush": false, 00:18:29.025 "reset": true, 00:18:29.025 "nvme_admin": false, 00:18:29.025 "nvme_io": false, 00:18:29.025 "nvme_io_md": false, 00:18:29.025 "write_zeroes": true, 00:18:29.025 "zcopy": false, 00:18:29.025 "get_zone_info": false, 00:18:29.025 "zone_management": false, 00:18:29.025 "zone_append": false, 00:18:29.025 "compare": false, 00:18:29.025 "compare_and_write": false, 00:18:29.025 "abort": false, 00:18:29.025 "seek_hole": true, 00:18:29.026 "seek_data": true, 00:18:29.026 "copy": false, 00:18:29.026 "nvme_iov_md": false 00:18:29.026 }, 00:18:29.026 "driver_specific": { 00:18:29.026 "lvol": { 00:18:29.026 "lvol_store_uuid": "3d345486-9a0d-4075-a76c-c9ab24bd6228", 00:18:29.026 "base_bdev": "nvme0n1", 00:18:29.026 "thin_provision": true, 00:18:29.026 "num_allocated_clusters": 0, 00:18:29.026 "snapshot": false, 00:18:29.026 "clone": false, 00:18:29.026 "esnap_clone": false 00:18:29.026 } 00:18:29.026 } 00:18:29.026 } 00:18:29.026 ]' 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:29.026 17:48:52 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:29.287 17:48:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:29.287 17:48:52 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:29.287 17:48:52 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:29.287 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:29.287 17:48:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:29.287 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:29.288 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:29.288 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:29.288 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:29.288 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1199acd5-a765-402a-bb04-f8aebf1e15f3 00:18:29.288 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:29.288 { 00:18:29.288 "name": "1199acd5-a765-402a-bb04-f8aebf1e15f3", 00:18:29.288 "aliases": [ 00:18:29.288 "lvs/nvme0n1p0" 00:18:29.288 ], 00:18:29.288 "product_name": "Logical Volume", 00:18:29.288 "block_size": 4096, 00:18:29.288 "num_blocks": 26476544, 00:18:29.288 "uuid": "1199acd5-a765-402a-bb04-f8aebf1e15f3", 00:18:29.288 "assigned_rate_limits": { 00:18:29.288 "rw_ios_per_sec": 0, 00:18:29.288 "rw_mbytes_per_sec": 0, 00:18:29.288 "r_mbytes_per_sec": 0, 00:18:29.288 "w_mbytes_per_sec": 0 00:18:29.288 }, 00:18:29.288 "claimed": false, 00:18:29.288 "zoned": false, 00:18:29.288 "supported_io_types": { 00:18:29.288 "read": true, 00:18:29.288 "write": true, 00:18:29.288 "unmap": true, 00:18:29.288 "flush": false, 00:18:29.288 "reset": true, 00:18:29.288 "nvme_admin": false, 00:18:29.288 "nvme_io": false, 00:18:29.288 "nvme_io_md": false, 00:18:29.288 "write_zeroes": true, 00:18:29.288 "zcopy": false, 00:18:29.288 "get_zone_info": false, 00:18:29.288 "zone_management": false, 00:18:29.288 "zone_append": false, 00:18:29.288 "compare": false, 00:18:29.288 "compare_and_write": false, 00:18:29.288 "abort": false, 00:18:29.288 "seek_hole": true, 00:18:29.288 "seek_data": true, 00:18:29.288 "copy": false, 00:18:29.288 "nvme_iov_md": false 00:18:29.288 }, 00:18:29.288 "driver_specific": { 00:18:29.288 "lvol": { 00:18:29.288 "lvol_store_uuid": "3d345486-9a0d-4075-a76c-c9ab24bd6228", 00:18:29.288 "base_bdev": "nvme0n1", 00:18:29.288 "thin_provision": true, 00:18:29.288 "num_allocated_clusters": 0, 00:18:29.288 "snapshot": false, 00:18:29.288 "clone": false, 00:18:29.288 "esnap_clone": false 00:18:29.288 } 00:18:29.288 } 00:18:29.288 } 00:18:29.288 ]' 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:29.550 17:48:52 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1199acd5-a765-402a-bb04-f8aebf1e15f3 -c nvc0n1p0 --l2p_dram_limit 60 00:18:29.550 [2024-11-20 17:48:53.071494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.550 [2024-11-20 17:48:53.071725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:29.550 [2024-11-20 17:48:53.071755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:29.550 
[2024-11-20 17:48:53.071765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.550 [2024-11-20 17:48:53.071905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.550 [2024-11-20 17:48:53.071921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.550 [2024-11-20 17:48:53.071934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:29.550 [2024-11-20 17:48:53.071942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.550 [2024-11-20 17:48:53.071997] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:29.550 [2024-11-20 17:48:53.072784] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:29.550 [2024-11-20 17:48:53.072809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.550 [2024-11-20 17:48:53.072819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.550 [2024-11-20 17:48:53.072831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:18:29.550 [2024-11-20 17:48:53.072839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.550 [2024-11-20 17:48:53.072907] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 98d59553-1bd4-460c-a4a1-4e7623e3f3f3 00:18:29.550 [2024-11-20 17:48:53.074622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.550 [2024-11-20 17:48:53.074676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:29.550 [2024-11-20 17:48:53.074688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:29.550 [2024-11-20 17:48:53.074699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.550 [2024-11-20 17:48:53.083603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.550 [2024-11-20 17:48:53.083660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.550 [2024-11-20 17:48:53.083672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.792 ms 00:18:29.550 [2024-11-20 17:48:53.083684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.550 [2024-11-20 17:48:53.083804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.550 [2024-11-20 17:48:53.083816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.550 [2024-11-20 17:48:53.083825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:29.550 [2024-11-20 17:48:53.083840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.550 [2024-11-20 17:48:53.083958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.550 [2024-11-20 17:48:53.083973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:29.550 [2024-11-20 17:48:53.083982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:29.550 [2024-11-20 17:48:53.083992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.550 [2024-11-20 17:48:53.084046] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:29.813 [2024-11-20 17:48:53.088392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-11-20 
17:48:53.088435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.813 [2024-11-20 17:48:53.088454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.364 ms 00:18:29.813 [2024-11-20 17:48:53.088462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-11-20 17:48:53.088518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.813 [2024-11-20 17:48:53.088528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:29.813 [2024-11-20 17:48:53.088539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:29.813 [2024-11-20 17:48:53.088548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.813 [2024-11-20 17:48:53.088608] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:29.813 [2024-11-20 17:48:53.088774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:29.813 [2024-11-20 17:48:53.088792] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:29.813 [2024-11-20 17:48:53.088803] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:29.813 [2024-11-20 17:48:53.088816] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:29.813 [2024-11-20 17:48:53.088825] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:29.813 [2024-11-20 17:48:53.088836] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:29.813 [2024-11-20 17:48:53.088844] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:29.813 [2024-11-20 17:48:53.088854] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:29.814 [2024-11-20 17:48:53.088863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:29.814 [2024-11-20 17:48:53.088893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.814 [2024-11-20 17:48:53.088901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:29.814 [2024-11-20 17:48:53.088913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:29.814 [2024-11-20 17:48:53.088920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.814 [2024-11-20 17:48:53.089016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.814 [2024-11-20 17:48:53.089025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:29.814 [2024-11-20 17:48:53.089036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:29.814 [2024-11-20 17:48:53.089043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.814 [2024-11-20 17:48:53.089175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:29.814 [2024-11-20 17:48:53.089187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:29.814 [2024-11-20 17:48:53.089199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089217] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:29.814 [2024-11-20 17:48:53.089224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:29.814 [2024-11-20 17:48:53.089249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.814 [2024-11-20 17:48:53.089264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:29.814 [2024-11-20 17:48:53.089271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:29.814 [2024-11-20 17:48:53.089280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.814 [2024-11-20 17:48:53.089288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:29.814 [2024-11-20 17:48:53.089297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:29.814 [2024-11-20 17:48:53.089303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:29.814 [2024-11-20 17:48:53.089327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:29.814 [2024-11-20 17:48:53.089352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:29.814 [2024-11-20 17:48:53.089376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:29.814 [2024-11-20 17:48:53.089400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:29.814 [2024-11-20 17:48:53.089427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:29.814 [2024-11-20 17:48:53.089454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.814 [2024-11-20 17:48:53.089470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:29.814 [2024-11-20 17:48:53.089491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:29.814 [2024-11-20 17:48:53.089500] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.814 [2024-11-20 17:48:53.089507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:29.814 [2024-11-20 17:48:53.089516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:29.814 [2024-11-20 17:48:53.089522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:29.814 [2024-11-20 17:48:53.089538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:29.814 [2024-11-20 17:48:53.089547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089553] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:29.814 [2024-11-20 17:48:53.089563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:29.814 [2024-11-20 17:48:53.089570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.814 [2024-11-20 17:48:53.089587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:29.814 [2024-11-20 17:48:53.089597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:29.814 [2024-11-20 17:48:53.089607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:29.814 [2024-11-20 17:48:53.089616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:29.814 [2024-11-20 17:48:53.089623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:29.814 [2024-11-20 17:48:53.089632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:29.814 [2024-11-20 17:48:53.089649] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:29.814 [2024-11-20 17:48:53.089664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.814 [2024-11-20 17:48:53.089679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:29.814 [2024-11-20 17:48:53.089689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:29.814 [2024-11-20 17:48:53.089697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:29.814 [2024-11-20 17:48:53.089705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:29.814 [2024-11-20 17:48:53.089713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:29.814 [2024-11-20 17:48:53.089721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:29.814 [2024-11-20 17:48:53.089729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:29.814 [2024-11-20 17:48:53.089738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:29.814 [2024-11-20 17:48:53.089745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:29.814 [2024-11-20 17:48:53.089756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:29.814 [2024-11-20 17:48:53.089763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:29.814 [2024-11-20 17:48:53.089773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:29.814 [2024-11-20 17:48:53.089781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:29.814 [2024-11-20 17:48:53.089790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:29.814 [2024-11-20 17:48:53.089797] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:29.814 [2024-11-20 17:48:53.089809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.814 [2024-11-20 17:48:53.089817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:29.814 [2024-11-20 17:48:53.089826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:29.814 [2024-11-20 17:48:53.089833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:29.814 [2024-11-20 17:48:53.089843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:29.814 [2024-11-20 17:48:53.089851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.814 [2024-11-20 17:48:53.089861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:29.814 [2024-11-20 17:48:53.089883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:18:29.814 [2024-11-20 17:48:53.089893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.814 [2024-11-20 17:48:53.089963] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:29.814 [2024-11-20 17:48:53.089977] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:33.118 [2024-11-20 17:48:56.544863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.118 [2024-11-20 17:48:56.544923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:33.118 [2024-11-20 17:48:56.544937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3454.890 ms 00:18:33.118 [2024-11-20 17:48:56.544947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.118 [2024-11-20 17:48:56.570238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.118 [2024-11-20 17:48:56.570283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.118 [2024-11-20 17:48:56.570294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.083 ms 00:18:33.118 [2024-11-20 17:48:56.570304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.118 [2024-11-20 17:48:56.570429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.118 [2024-11-20 17:48:56.570441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.118 [2024-11-20 17:48:56.570449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:33.118 [2024-11-20 17:48:56.570460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.118 [2024-11-20 17:48:56.610847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.118 [2024-11-20 17:48:56.611027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.118 [2024-11-20 17:48:56.611046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.344 ms 00:18:33.118 [2024-11-20 17:48:56.611058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.118 [2024-11-20 17:48:56.611098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.118 [2024-11-20 17:48:56.611109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.118 [2024-11-20 17:48:56.611118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.118 [2024-11-20 17:48:56.611127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.118 [2024-11-20 17:48:56.611489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.118 [2024-11-20 17:48:56.611508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.118 [2024-11-20 17:48:56.611519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:33.118 [2024-11-20 17:48:56.611527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.118 [2024-11-20 17:48:56.611654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.119 [2024-11-20 17:48:56.611665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.119 [2024-11-20 17:48:56.611672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:33.119 [2024-11-20 17:48:56.611684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.119 [2024-11-20 17:48:56.625884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.119 [2024-11-20 17:48:56.625918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.119 [2024-11-20 
17:48:56.625928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.173 ms 00:18:33.119 [2024-11-20 17:48:56.625938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.119 [2024-11-20 17:48:56.637258] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:33.119 [2024-11-20 17:48:56.651464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.119 [2024-11-20 17:48:56.651513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.119 [2024-11-20 17:48:56.651528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.441 ms 00:18:33.119 [2024-11-20 17:48:56.651535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.705665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.705707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:33.379 [2024-11-20 17:48:56.705724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.095 ms 00:18:33.379 [2024-11-20 17:48:56.705732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.705928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.705941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.379 [2024-11-20 17:48:56.705954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:18:33.379 [2024-11-20 17:48:56.705961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.729529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.729682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:33.379 [2024-11-20 17:48:56.729702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.515 ms 00:18:33.379 [2024-11-20 17:48:56.729710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.752142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.752261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:33.379 [2024-11-20 17:48:56.752280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.393 ms 00:18:33.379 [2024-11-20 17:48:56.752287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.752852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.752880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.379 [2024-11-20 17:48:56.752892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:18:33.379 [2024-11-20 17:48:56.752899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.825143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.825191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:33.379 [2024-11-20 17:48:56.825212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.200 ms 00:18:33.379 [2024-11-20 17:48:56.825220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 
17:48:56.849616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.849661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:33.379 [2024-11-20 17:48:56.849676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.280 ms 00:18:33.379 [2024-11-20 17:48:56.849684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.873079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.873122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:33.379 [2024-11-20 17:48:56.873134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.346 ms 00:18:33.379 [2024-11-20 17:48:56.873142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.896772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.896833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.379 [2024-11-20 17:48:56.896847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.579 ms 00:18:33.379 [2024-11-20 17:48:56.896855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.896918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.896928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.379 [2024-11-20 17:48:56.896943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:33.379 [2024-11-20 17:48:56.896950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.897036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.379 [2024-11-20 17:48:56.897046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.379 [2024-11-20 17:48:56.897057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:33.379 [2024-11-20 17:48:56.897064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.379 [2024-11-20 17:48:56.897951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3826.046 ms, result 0 00:18:33.379 { 00:18:33.379 "name": "ftl0", 00:18:33.379 "uuid": "98d59553-1bd4-460c-a4a1-4e7623e3f3f3" 00:18:33.379 } 00:18:33.640 17:48:56 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:33.640 17:48:56 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:33.640 17:48:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:33.640 17:48:56 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:33.640 17:48:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:33.640 17:48:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:33.640 17:48:56 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:33.640 17:48:57 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:33.900 [ 00:18:33.900 { 00:18:33.900 "name": "ftl0", 00:18:33.900 "aliases": [ 00:18:33.900 "98d59553-1bd4-460c-a4a1-4e7623e3f3f3" 00:18:33.900 ], 00:18:33.900 "product_name": "FTL 
disk", 00:18:33.900 "block_size": 4096, 00:18:33.900 "num_blocks": 20971520, 00:18:33.900 "uuid": "98d59553-1bd4-460c-a4a1-4e7623e3f3f3", 00:18:33.900 "assigned_rate_limits": { 00:18:33.900 "rw_ios_per_sec": 0, 00:18:33.900 "rw_mbytes_per_sec": 0, 00:18:33.900 "r_mbytes_per_sec": 0, 00:18:33.900 "w_mbytes_per_sec": 0 00:18:33.900 }, 00:18:33.900 "claimed": false, 00:18:33.900 "zoned": false, 00:18:33.900 "supported_io_types": { 00:18:33.900 "read": true, 00:18:33.900 "write": true, 00:18:33.900 "unmap": true, 00:18:33.900 "flush": true, 00:18:33.900 "reset": false, 00:18:33.900 "nvme_admin": false, 00:18:33.900 "nvme_io": false, 00:18:33.900 "nvme_io_md": false, 00:18:33.900 "write_zeroes": true, 00:18:33.900 "zcopy": false, 00:18:33.900 "get_zone_info": false, 00:18:33.900 "zone_management": false, 00:18:33.900 "zone_append": false, 00:18:33.900 "compare": false, 00:18:33.900 "compare_and_write": false, 00:18:33.900 "abort": false, 00:18:33.900 "seek_hole": false, 00:18:33.900 "seek_data": false, 00:18:33.900 "copy": false, 00:18:33.901 "nvme_iov_md": false 00:18:33.901 }, 00:18:33.901 "driver_specific": { 00:18:33.901 "ftl": { 00:18:33.901 "base_bdev": "1199acd5-a765-402a-bb04-f8aebf1e15f3", 00:18:33.901 "cache": "nvc0n1p0" 00:18:33.901 } 00:18:33.901 } 00:18:33.901 } 00:18:33.901 ] 00:18:33.901 17:48:57 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:33.901 17:48:57 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:33.901 17:48:57 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:34.161 17:48:57 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:34.161 17:48:57 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:34.422 [2024-11-20 17:48:57.714751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.422 [2024-11-20 17:48:57.714938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.422 [2024-11-20 17:48:57.715001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:34.422 [2024-11-20 17:48:57.715033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.422 [2024-11-20 17:48:57.715148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:34.422 [2024-11-20 17:48:57.717757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.422 [2024-11-20 17:48:57.717789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.423 [2024-11-20 17:48:57.717801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:18:34.423 [2024-11-20 17:48:57.717809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.718235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.718250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.423 [2024-11-20 17:48:57.718261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:18:34.423 [2024-11-20 17:48:57.718268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.721516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.721538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.423 
[2024-11-20 17:48:57.721552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:18:34.423 [2024-11-20 17:48:57.721560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.727771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.727802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:34.423 [2024-11-20 17:48:57.727814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:18:34.423 [2024-11-20 17:48:57.727822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.750976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.751009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:34.423 [2024-11-20 17:48:57.751021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.035 ms 00:18:34.423 [2024-11-20 17:48:57.751029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.766295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.766418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:34.423 [2024-11-20 17:48:57.766442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.208 ms 00:18:34.423 [2024-11-20 17:48:57.766449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.766639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.766651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.423 [2024-11-20 17:48:57.766661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:34.423 [2024-11-20 17:48:57.766669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.789730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.789842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:34.423 [2024-11-20 17:48:57.789860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.036 ms 00:18:34.423 [2024-11-20 17:48:57.789867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.812604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.812712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:34.423 [2024-11-20 17:48:57.812731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.685 ms 00:18:34.423 [2024-11-20 17:48:57.812738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.835433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.835542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.423 [2024-11-20 17:48:57.835559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.655 ms 00:18:34.423 [2024-11-20 17:48:57.835567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.857689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.423 [2024-11-20 17:48:57.857719] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.423 [2024-11-20 17:48:57.857731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.020 ms 00:18:34.423 [2024-11-20 17:48:57.857738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.423 [2024-11-20 17:48:57.857780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.423 [2024-11-20 17:48:57.857793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.857996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 
[2024-11-20 17:48:57.858014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:34.423 [2024-11-20 17:48:57.858207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:34.424 [2024-11-20 17:48:57.858223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:34.424 [2024-11-20 17:48:57.858681] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.424 [2024-11-20 17:48:57.858690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 98d59553-1bd4-460c-a4a1-4e7623e3f3f3 00:18:34.424 [2024-11-20 17:48:57.858697] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.424 [2024-11-20 17:48:57.858708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.424 [2024-11-20 17:48:57.858717] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.424 [2024-11-20 17:48:57.858726] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.424 [2024-11-20 17:48:57.858733] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:34.424 [2024-11-20 17:48:57.858742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.424 [2024-11-20 17:48:57.858749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.424 [2024-11-20 17:48:57.858757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.424 [2024-11-20 17:48:57.858763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.424 [2024-11-20 17:48:57.858771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.424 [2024-11-20 17:48:57.858778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.424 [2024-11-20 17:48:57.858787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:18:34.424 [2024-11-20 17:48:57.858794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.424 [2024-11-20 17:48:57.871017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.424 [2024-11-20 17:48:57.871045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.424 [2024-11-20 17:48:57.871056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.188 ms 00:18:34.424 [2024-11-20 17:48:57.871064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.424 [2024-11-20 17:48:57.871408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.424 [2024-11-20 17:48:57.871417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.424 [2024-11-20 17:48:57.871426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:18:34.424 [2024-11-20 17:48:57.871433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.424 [2024-11-20 17:48:57.914765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.424 [2024-11-20 17:48:57.914801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.424 [2024-11-20 17:48:57.914813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.424 [2024-11-20 17:48:57.914821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:34.424 [2024-11-20 17:48:57.914898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.424 [2024-11-20 17:48:57.914907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.424 [2024-11-20 17:48:57.914917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.424 [2024-11-20 17:48:57.914924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.424 [2024-11-20 17:48:57.915005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.424 [2024-11-20 17:48:57.915015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.424 [2024-11-20 17:48:57.915025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.424 [2024-11-20 17:48:57.915032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.424 [2024-11-20 17:48:57.915061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.424 [2024-11-20 17:48:57.915069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.424 [2024-11-20 17:48:57.915077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.424 [2024-11-20 17:48:57.915084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:57.994953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:57.994992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.685 [2024-11-20 17:48:57.995004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 17:48:57.995012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:58.058210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.685 [2024-11-20 17:48:58.058222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 17:48:58.058230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:58.058323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.685 [2024-11-20 17:48:58.058335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 17:48:58.058343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:58.058410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.685 [2024-11-20 17:48:58.058419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 17:48:58.058426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:58.058545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.685 [2024-11-20 17:48:58.058557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 
17:48:58.058564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:58.058625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.685 [2024-11-20 17:48:58.058635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 17:48:58.058642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:58.058693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.685 [2024-11-20 17:48:58.058702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 17:48:58.058711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.685 [2024-11-20 17:48:58.058765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.685 [2024-11-20 17:48:58.058775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.685 [2024-11-20 17:48:58.058782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.685 [2024-11-20 17:48:58.058962] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.157 ms, result 0 00:18:34.685 true 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75305 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75305 ']' 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75305 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75305 00:18:34.685 killing process with pid 75305 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75305' 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75305 00:18:34.685 17:48:58 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75305 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:41.292 17:49:04 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:41.293 17:49:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:41.553 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:41.553 fio-3.35 00:18:41.553 Starting 1 thread 00:18:48.144 00:18:48.144 test: (groupid=0, jobs=1): err= 0: pid=75500: Wed Nov 20 17:49:10 2024 00:18:48.144 read: IOPS=824, BW=54.8MiB/s (57.4MB/s)(255MiB/4648msec) 00:18:48.144 slat (nsec): min=3005, max=32498, avg=5149.69, stdev=2751.98 00:18:48.144 clat (usec): min=260, max=1630, avg=545.07, stdev=209.19 00:18:48.144 lat (usec): min=263, max=1635, avg=550.22, stdev=210.22 00:18:48.144 clat percentiles (usec): 00:18:48.144 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:18:48.144 | 30.00th=[ 404], 40.00th=[ 457], 50.00th=[ 519], 60.00th=[ 553], 00:18:48.144 | 70.00th=[ 594], 80.00th=[ 668], 90.00th=[ 906], 95.00th=[ 963], 00:18:48.144 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[ 1303], 99.95th=[ 1418], 00:18:48.144 | 99.99th=[ 1631] 00:18:48.144 write: IOPS=830, BW=55.1MiB/s (57.8MB/s)(256MiB/4645msec); 0 zone resets 00:18:48.144 slat (nsec): min=13579, max=71666, avg=21023.49, stdev=6211.05 00:18:48.144 clat (usec): min=292, max=2548, avg=623.02, stdev=254.97 00:18:48.144 lat (usec): min=309, max=2572, avg=644.04, stdev=257.79 00:18:48.144 clat percentiles (usec): 00:18:48.144 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 363], 00:18:48.144 | 30.00th=[ 474], 40.00th=[ 545], 50.00th=[ 603], 60.00th=[ 627], 00:18:48.144 | 70.00th=[ 668], 80.00th=[ 807], 90.00th=[ 988], 95.00th=[ 1045], 00:18:48.144 | 99.00th=[ 1565], 99.50th=[ 1778], 99.90th=[ 2008], 99.95th=[ 2507], 00:18:48.144 | 99.99th=[ 2540] 00:18:48.144 bw ( KiB/s): min=38488, max=81056, per=98.54%, avg=55624.00, stdev=14128.14, samples=9 00:18:48.144 iops : min= 566, max= 1192, avg=818.00, stdev=207.77, samples=9 00:18:48.144 lat (usec) : 500=41.63%, 750=38.39%, 
1000=14.24% 00:18:48.144 lat (msec) : 2=5.68%, 4=0.05% 00:18:48.144 cpu : usr=99.16%, sys=0.04%, ctx=7, majf=0, minf=1169 00:18:48.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.144 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:48.144 00:18:48.144 Run status group 0 (all jobs): 00:18:48.144 READ: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=255MiB (267MB), run=4648-4648msec 00:18:48.144 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=256MiB (269MB), run=4645-4645msec 00:18:48.715 ----------------------------------------------------- 00:18:48.715 Suppressions used: 00:18:48.715 count bytes template 00:18:48.715 1 5 /usr/src/fio/parse.c 00:18:48.715 1 8 libtcmalloc_minimal.so 00:18:48.715 1 904 libcrypto.so 00:18:48.715 ----------------------------------------------------- 00:18:48.715 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:48.977 17:49:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:49.238 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:49.238 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:49.238 fio-3.35 00:18:49.238 Starting 2 threads 00:19:15.897 00:19:15.897 first_half: (groupid=0, jobs=1): err= 0: pid=75614: Wed Nov 20 17:49:37 2024 00:19:15.897 read: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(255MiB/23942msec) 00:19:15.897 slat (nsec): min=3024, max=32600, avg=4185.40, stdev=1088.05 00:19:15.897 clat (usec): min=565, max=476110, avg=34116.74, stdev=22526.43 00:19:15.897 lat (usec): min=569, max=476116, avg=34120.92, stdev=22526.51 00:19:15.897 clat percentiles (msec): 00:19:15.897 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 29], 00:19:15.897 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 32], 00:19:15.897 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 45], 00:19:15.897 | 99.00th=[ 148], 99.50th=[ 169], 99.90th=[ 305], 99.95th=[ 422], 00:19:15.897 | 99.99th=[ 468] 00:19:15.897 write: IOPS=3194, BW=12.5MiB/s (13.1MB/s)(256MiB/20513msec); 0 zone resets 00:19:15.897 slat (usec): min=3, max=4071, avg= 6.28, stdev=34.17 00:19:15.897 clat (usec): min=349, max=94798, avg=12767.03, stdev=19879.83 00:19:15.897 lat (usec): min=355, max=94804, avg=12773.31, stdev=19880.00 00:19:15.897 clat percentiles (usec): 00:19:15.897 | 1.00th=[ 709], 5.00th=[ 955], 10.00th=[ 1139], 20.00th=[ 1500], 00:19:15.897 | 30.00th=[ 3064], 40.00th=[ 4424], 50.00th=[ 5276], 60.00th=[ 6063], 00:19:15.897 | 70.00th=[ 8160], 80.00th=[15270], 90.00th=[35914], 95.00th=[70779], 00:19:15.897 | 99.00th=[76022], 99.50th=[79168], 99.90th=[88605], 99.95th=[90702], 00:19:15.897 | 99.99th=[92799] 00:19:15.897 bw ( KiB/s): min= 928, max=42872, per=89.19%, avg=22795.13, stdev=11441.54, samples=23 00:19:15.897 iops : min= 232, max=10718, avg=5698.70, stdev=2860.38, samples=23 00:19:15.897 lat (usec) : 500=0.03%, 750=0.73%, 1000=2.26% 00:19:15.897 lat (msec) : 2=9.15%, 4=6.71%, 10=19.56%, 20=5.88%, 50=49.07% 00:19:15.897 lat (msec) : 100=5.50%, 250=1.04%, 500=0.07% 00:19:15.897 cpu : usr=99.21%, sys=0.31%, ctx=68, majf=0, minf=5573 00:19:15.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:15.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:15.897 issued rwts: total=65243,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:15.897 second_half: (groupid=0, jobs=1): err= 0: pid=75615: Wed Nov 20 17:49:37 2024 00:19:15.897 read: IOPS=2739, BW=10.7MiB/s (11.2MB/s)(254MiB/23781msec) 00:19:15.897 slat (nsec): min=2998, max=51879, avg=4230.98, stdev=1217.94 00:19:15.897 clat (usec): min=687, max=437195, avg=34848.20, stdev=20187.57 00:19:15.897 lat (usec): min=691, max=437200, avg=34852.43, stdev=20187.63 00:19:15.897 clat percentiles (msec): 00:19:15.897 | 1.00th=[ 5], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:19:15.897 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:19:15.897 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 
40], 95.00th=[ 48], 00:19:15.897 | 99.00th=[ 146], 99.50th=[ 165], 99.90th=[ 222], 99.95th=[ 279], 00:19:15.897 | 99.99th=[ 405] 00:19:15.897 write: IOPS=3777, BW=14.8MiB/s (15.5MB/s)(256MiB/17349msec); 0 zone resets 00:19:15.897 slat (usec): min=3, max=2634, avg= 6.27, stdev=14.78 00:19:15.897 clat (usec): min=343, max=94147, avg=11781.96, stdev=19416.06 00:19:15.897 lat (usec): min=349, max=94157, avg=11788.24, stdev=19416.22 00:19:15.897 clat percentiles (usec): 00:19:15.897 | 1.00th=[ 742], 5.00th=[ 971], 10.00th=[ 1123], 20.00th=[ 1352], 00:19:15.897 | 30.00th=[ 1696], 40.00th=[ 3589], 50.00th=[ 4817], 60.00th=[ 5866], 00:19:15.897 | 70.00th=[ 7373], 80.00th=[14091], 90.00th=[23987], 95.00th=[69731], 00:19:15.897 | 99.00th=[76022], 99.50th=[79168], 99.90th=[88605], 99.95th=[91751], 00:19:15.897 | 99.99th=[92799] 00:19:15.897 bw ( KiB/s): min= 216, max=41384, per=93.24%, avg=23831.27, stdev=13341.75, samples=22 00:19:15.897 iops : min= 54, max=10346, avg=5957.82, stdev=3335.44, samples=22 00:19:15.897 lat (usec) : 500=0.02%, 750=0.55%, 1000=2.38% 00:19:15.897 lat (msec) : 2=13.45%, 4=5.10%, 10=15.96%, 20=7.13%, 50=48.60% 00:19:15.897 lat (msec) : 100=5.72%, 250=1.06%, 500=0.03% 00:19:15.897 cpu : usr=99.21%, sys=0.12%, ctx=32, majf=0, minf=5546 00:19:15.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:15.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.897 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:15.897 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:15.897 00:19:15.897 Run status group 0 (all jobs): 00:19:15.897 READ: bw=21.3MiB/s (22.3MB/s), 10.6MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=509MiB (534MB), run=23781-23942msec 00:19:15.897 WRITE: bw=25.0MiB/s (26.2MB/s), 12.5MiB/s-14.8MiB/s (13.1MB/s-15.5MB/s), io=512MiB (537MB), run=17349-20513msec 00:19:15.897 ----------------------------------------------------- 00:19:15.897 Suppressions used: 00:19:15.897 count bytes template 00:19:15.897 2 10 /usr/src/fio/parse.c 00:19:15.897 3 288 /usr/src/fio/iolog.c 00:19:15.897 1 8 libtcmalloc_minimal.so 00:19:15.897 1 904 libcrypto.so 00:19:15.897 ----------------------------------------------------- 00:19:15.897 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:15.897 17:49:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:15.897 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:15.897 fio-3.35 00:19:15.897 Starting 1 thread 00:19:30.857 00:19:30.857 test: (groupid=0, jobs=1): err= 0: pid=75922: Wed Nov 20 17:49:53 2024 00:19:30.857 read: IOPS=7523, BW=29.4MiB/s (30.8MB/s)(255MiB/8666msec) 00:19:30.857 slat (nsec): min=3083, max=72565, avg=4494.76, stdev=1326.62 00:19:30.857 clat (usec): min=473, max=32755, avg=17003.68, stdev=3091.98 00:19:30.857 lat (usec): min=477, max=32760, avg=17008.18, stdev=3092.15 00:19:30.857 clat percentiles (usec): 00:19:30.857 | 1.00th=[13042], 5.00th=[13829], 10.00th=[14091], 20.00th=[15008], 00:19:30.857 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:19:30.857 | 70.00th=[17433], 80.00th=[19792], 90.00th=[21627], 95.00th=[22938], 00:19:30.857 | 99.00th=[27395], 99.50th=[28181], 99.90th=[30540], 99.95th=[31065], 00:19:30.857 | 99.99th=[31589] 00:19:30.857 write: IOPS=15.1k, BW=59.0MiB/s (61.9MB/s)(256MiB/4337msec); 0 zone resets 00:19:30.857 slat (usec): min=4, max=297, avg= 5.80, stdev= 2.43 00:19:30.857 clat (usec): min=499, max=39173, avg=8427.45, stdev=10098.55 00:19:30.857 lat (usec): min=506, max=39179, avg=8433.25, stdev=10098.50 00:19:30.857 clat percentiles (usec): 00:19:30.857 | 1.00th=[ 660], 5.00th=[ 791], 10.00th=[ 898], 20.00th=[ 1037], 00:19:30.857 | 30.00th=[ 1172], 40.00th=[ 1549], 50.00th=[ 5669], 60.00th=[ 6652], 00:19:30.857 | 70.00th=[ 8160], 80.00th=[10290], 90.00th=[29492], 95.00th=[31851], 00:19:30.857 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36439], 99.95th=[37487], 00:19:30.857 | 99.99th=[38011] 00:19:30.857 bw ( KiB/s): min=40328, max=82384, per=96.38%, avg=58254.22, stdev=11699.23, samples=9 00:19:30.857 iops : min=10082, max=20596, avg=14563.56, stdev=2924.81, samples=9 00:19:30.857 lat (usec) : 500=0.01%, 750=1.72%, 1000=7.01% 00:19:30.857 lat (msec) : 2=11.88%, 4=0.56%, 10=18.51%, 20=43.24%, 50=17.07% 00:19:30.857 cpu : usr=99.15%, sys=0.13%, ctx=24, majf=0, 
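The fio_bdev xtrace repeated before each of the three fio runs above shows the sanitizer hand-off: the helper runs ldd against the spdk_bdev fio plugin, greps for libasan, takes the resolved path with awk, and prepends that runtime to LD_PRELOAD so ASan is already loaded when fio dlopen()s the instrumented plugin. A minimal standalone sketch of the same technique, with the plugin and job paths taken from the log (other environments would need different paths):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
# Resolve the ASan runtime the plugin links against, exactly as the xtrace does:
#   ldd $plugin | grep libasan | awk '{print $3}'
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Preload the sanitizer runtime before the plugin itself; if none is linked
# in, fall back to preloading only the plugin.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"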
minf=5565 00:19:30.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:30.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.857 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.857 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.857 00:19:30.857 Run status group 0 (all jobs): 00:19:30.857 READ: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=255MiB (267MB), run=8666-8666msec 00:19:30.857 WRITE: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=256MiB (268MB), run=4337-4337msec 00:19:31.799 ----------------------------------------------------- 00:19:31.799 Suppressions used: 00:19:31.799 count bytes template 00:19:31.799 1 5 /usr/src/fio/parse.c 00:19:31.799 2 192 /usr/src/fio/iolog.c 00:19:31.799 1 8 libtcmalloc_minimal.so 00:19:31.799 1 904 libcrypto.so 00:19:31.799 ----------------------------------------------------- 00:19:31.799 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.799 Remove shared memory files 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57169 /dev/shm/spdk_tgt_trace.pid74222 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:31.799 ************************************ 00:19:31.799 END TEST ftl_fio_basic 00:19:31.799 ************************************ 00:19:31.799 00:19:31.799 real 1m6.049s 00:19:31.799 user 2m26.211s 00:19:31.799 sys 0m2.962s 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.799 17:49:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:31.799 17:49:55 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:31.799 17:49:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:31.799 17:49:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.799 17:49:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:31.799 ************************************ 00:19:31.799 START TEST ftl_bdevperf 00:19:31.799 ************************************ 00:19:31.799 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:32.060 * Looking for test storage... 
00:19:32.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:32.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.060 --rc genhtml_branch_coverage=1 00:19:32.060 --rc genhtml_function_coverage=1 00:19:32.060 --rc genhtml_legend=1 00:19:32.060 --rc geninfo_all_blocks=1 00:19:32.060 --rc geninfo_unexecuted_blocks=1 00:19:32.060 00:19:32.060 ' 00:19:32.060 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:32.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.060 --rc genhtml_branch_coverage=1 00:19:32.060 
--rc genhtml_function_coverage=1 00:19:32.060 --rc genhtml_legend=1 00:19:32.061 --rc geninfo_all_blocks=1 00:19:32.061 --rc geninfo_unexecuted_blocks=1 00:19:32.061 00:19:32.061 ' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:32.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.061 --rc genhtml_branch_coverage=1 00:19:32.061 --rc genhtml_function_coverage=1 00:19:32.061 --rc genhtml_legend=1 00:19:32.061 --rc geninfo_all_blocks=1 00:19:32.061 --rc geninfo_unexecuted_blocks=1 00:19:32.061 00:19:32.061 ' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:32.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.061 --rc genhtml_branch_coverage=1 00:19:32.061 --rc genhtml_function_coverage=1 00:19:32.061 --rc genhtml_legend=1 00:19:32.061 --rc geninfo_all_blocks=1 00:19:32.061 --rc geninfo_unexecuted_blocks=1 00:19:32.061 00:19:32.061 ' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76155 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76155 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76155 ']' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.061 17:49:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:32.061 [2024-11-20 17:49:55.567199] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
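The bdevperf launch visible above (bdevperf.sh@17-21) starts the app idle with -z so the FTL bdev can be configured over RPC first, names the target with -T ftl0, installs a kill trap, and then waitforlisten blocks until the RPC socket answers before the script continues. A rough sketch of that launch pattern, using only the binary and flags shown in the log, with a simplified poll standing in for the waitforlisten helper:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$bdevperf" -z -T ftl0 &
bdevperf_pid=$!
# The real script uses the killprocess helper here; plain kill suffices for a sketch.
trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
# Stand-in for waitforlisten: poll until the app's RPC socket responds.
until "$rpc" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done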
00:19:32.061 [2024-11-20 17:49:55.567571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76155 ] 00:19:32.322 [2024-11-20 17:49:55.733784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.322 [2024-11-20 17:49:55.854121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.893 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.893 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:32.893 17:49:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:32.893 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:32.893 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:32.894 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:32.894 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:33.154 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.415 { 00:19:33.415 "name": "nvme0n1", 00:19:33.415 "aliases": [ 00:19:33.415 "e7c1ff35-7bbb-405a-810c-7cc787526cc6" 00:19:33.415 ], 00:19:33.415 "product_name": "NVMe disk", 00:19:33.415 "block_size": 4096, 00:19:33.415 "num_blocks": 1310720, 00:19:33.415 "uuid": "e7c1ff35-7bbb-405a-810c-7cc787526cc6", 00:19:33.415 "numa_id": -1, 00:19:33.415 "assigned_rate_limits": { 00:19:33.415 "rw_ios_per_sec": 0, 00:19:33.415 "rw_mbytes_per_sec": 0, 00:19:33.415 "r_mbytes_per_sec": 0, 00:19:33.415 "w_mbytes_per_sec": 0 00:19:33.415 }, 00:19:33.415 "claimed": true, 00:19:33.415 "claim_type": "read_many_write_one", 00:19:33.415 "zoned": false, 00:19:33.415 "supported_io_types": { 00:19:33.415 "read": true, 00:19:33.415 "write": true, 00:19:33.415 "unmap": true, 00:19:33.415 "flush": true, 00:19:33.415 "reset": true, 00:19:33.415 "nvme_admin": true, 00:19:33.415 "nvme_io": true, 00:19:33.415 "nvme_io_md": false, 00:19:33.415 "write_zeroes": true, 00:19:33.415 "zcopy": false, 00:19:33.415 "get_zone_info": false, 00:19:33.415 "zone_management": false, 00:19:33.415 "zone_append": false, 00:19:33.415 "compare": true, 00:19:33.415 "compare_and_write": false, 00:19:33.415 "abort": true, 00:19:33.415 "seek_hole": false, 00:19:33.415 "seek_data": false, 00:19:33.415 "copy": true, 00:19:33.415 "nvme_iov_md": false 00:19:33.415 }, 00:19:33.415 "driver_specific": { 00:19:33.415 
"nvme": [ 00:19:33.415 { 00:19:33.415 "pci_address": "0000:00:11.0", 00:19:33.415 "trid": { 00:19:33.415 "trtype": "PCIe", 00:19:33.415 "traddr": "0000:00:11.0" 00:19:33.415 }, 00:19:33.415 "ctrlr_data": { 00:19:33.415 "cntlid": 0, 00:19:33.415 "vendor_id": "0x1b36", 00:19:33.415 "model_number": "QEMU NVMe Ctrl", 00:19:33.415 "serial_number": "12341", 00:19:33.415 "firmware_revision": "8.0.0", 00:19:33.415 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:33.415 "oacs": { 00:19:33.415 "security": 0, 00:19:33.415 "format": 1, 00:19:33.415 "firmware": 0, 00:19:33.415 "ns_manage": 1 00:19:33.415 }, 00:19:33.415 "multi_ctrlr": false, 00:19:33.415 "ana_reporting": false 00:19:33.415 }, 00:19:33.415 "vs": { 00:19:33.415 "nvme_version": "1.4" 00:19:33.415 }, 00:19:33.415 "ns_data": { 00:19:33.415 "id": 1, 00:19:33.415 "can_share": false 00:19:33.415 } 00:19:33.415 } 00:19:33.415 ], 00:19:33.415 "mp_policy": "active_passive" 00:19:33.415 } 00:19:33.415 } 00:19:33.415 ]' 00:19:33.415 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.676 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.676 17:49:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:33.676 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:33.936 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3d345486-9a0d-4075-a76c-c9ab24bd6228 00:19:33.936 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:33.936 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d345486-9a0d-4075-a76c-c9ab24bd6228 00:19:34.197 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:34.197 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=eeed38ef-1854-42ae-9164-4fe0a8018938 00:19:34.197 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eeed38ef-1854-42ae-9164-4fe0a8018938 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.457 17:49:57 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:34.457 17:49:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:34.715 { 00:19:34.715 "name": "aae41064-aa70-4944-b65f-a219868e1ad4", 00:19:34.715 "aliases": [ 00:19:34.715 "lvs/nvme0n1p0" 00:19:34.715 ], 00:19:34.715 "product_name": "Logical Volume", 00:19:34.715 "block_size": 4096, 00:19:34.715 "num_blocks": 26476544, 00:19:34.715 "uuid": "aae41064-aa70-4944-b65f-a219868e1ad4", 00:19:34.715 "assigned_rate_limits": { 00:19:34.715 "rw_ios_per_sec": 0, 00:19:34.715 "rw_mbytes_per_sec": 0, 00:19:34.715 "r_mbytes_per_sec": 0, 00:19:34.715 "w_mbytes_per_sec": 0 00:19:34.715 }, 00:19:34.715 "claimed": false, 00:19:34.715 "zoned": false, 00:19:34.715 "supported_io_types": { 00:19:34.715 "read": true, 00:19:34.715 "write": true, 00:19:34.715 "unmap": true, 00:19:34.715 "flush": false, 00:19:34.715 "reset": true, 00:19:34.715 "nvme_admin": false, 00:19:34.715 "nvme_io": false, 00:19:34.715 "nvme_io_md": false, 00:19:34.715 "write_zeroes": true, 00:19:34.715 "zcopy": false, 00:19:34.715 "get_zone_info": false, 00:19:34.715 "zone_management": false, 00:19:34.715 "zone_append": false, 00:19:34.715 "compare": false, 00:19:34.715 "compare_and_write": false, 00:19:34.715 "abort": false, 00:19:34.715 "seek_hole": true, 00:19:34.715 "seek_data": true, 00:19:34.715 "copy": false, 00:19:34.715 "nvme_iov_md": false 00:19:34.715 }, 00:19:34.715 "driver_specific": { 00:19:34.715 "lvol": { 00:19:34.715 "lvol_store_uuid": "eeed38ef-1854-42ae-9164-4fe0a8018938", 00:19:34.715 "base_bdev": "nvme0n1", 00:19:34.715 "thin_provision": true, 00:19:34.715 "num_allocated_clusters": 0, 00:19:34.715 "snapshot": false, 00:19:34.715 "clone": false, 00:19:34.715 "esnap_clone": false 00:19:34.715 } 00:19:34.715 } 00:19:34.715 } 00:19:34.715 ]' 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:34.715 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=aae41064-aa70-4944-b65f-a219868e1ad4 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:34.975 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aae41064-aa70-4944-b65f-a219868e1ad4 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.236 { 00:19:35.236 "name": "aae41064-aa70-4944-b65f-a219868e1ad4", 00:19:35.236 "aliases": [ 00:19:35.236 "lvs/nvme0n1p0" 00:19:35.236 ], 00:19:35.236 "product_name": "Logical Volume", 00:19:35.236 "block_size": 4096, 00:19:35.236 "num_blocks": 26476544, 00:19:35.236 "uuid": "aae41064-aa70-4944-b65f-a219868e1ad4", 00:19:35.236 "assigned_rate_limits": { 00:19:35.236 "rw_ios_per_sec": 0, 00:19:35.236 "rw_mbytes_per_sec": 0, 00:19:35.236 "r_mbytes_per_sec": 0, 00:19:35.236 "w_mbytes_per_sec": 0 00:19:35.236 }, 00:19:35.236 "claimed": false, 00:19:35.236 "zoned": false, 00:19:35.236 "supported_io_types": { 00:19:35.236 "read": true, 00:19:35.236 "write": true, 00:19:35.236 "unmap": true, 00:19:35.236 "flush": false, 00:19:35.236 "reset": true, 00:19:35.236 "nvme_admin": false, 00:19:35.236 "nvme_io": false, 00:19:35.236 "nvme_io_md": false, 00:19:35.236 "write_zeroes": true, 00:19:35.236 "zcopy": false, 00:19:35.236 "get_zone_info": false, 00:19:35.236 "zone_management": false, 00:19:35.236 "zone_append": false, 00:19:35.236 "compare": false, 00:19:35.236 "compare_and_write": false, 00:19:35.236 "abort": false, 00:19:35.236 "seek_hole": true, 00:19:35.236 "seek_data": true, 00:19:35.236 "copy": false, 00:19:35.236 "nvme_iov_md": false 00:19:35.236 }, 00:19:35.236 "driver_specific": { 00:19:35.236 "lvol": { 00:19:35.236 "lvol_store_uuid": "eeed38ef-1854-42ae-9164-4fe0a8018938", 00:19:35.236 "base_bdev": "nvme0n1", 00:19:35.236 "thin_provision": true, 00:19:35.236 "num_allocated_clusters": 0, 00:19:35.236 "snapshot": false, 00:19:35.236 "clone": false, 00:19:35.236 "esnap_clone": false 00:19:35.236 } 00:19:35.236 } 00:19:35.236 } 00:19:35.236 ]' 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:35.236 17:49:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:35.497 17:49:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:35.497 17:49:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size aae41064-aa70-4944-b65f-a219868e1ad4 00:19:35.497 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=aae41064-aa70-4944-b65f-a219868e1ad4 00:19:35.497 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:35.497 17:49:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:35.497 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:35.497 17:49:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aae41064-aa70-4944-b65f-a219868e1ad4 00:19:35.768 17:49:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.768 { 00:19:35.768 "name": "aae41064-aa70-4944-b65f-a219868e1ad4", 00:19:35.768 "aliases": [ 00:19:35.768 "lvs/nvme0n1p0" 00:19:35.768 ], 00:19:35.768 "product_name": "Logical Volume", 00:19:35.768 "block_size": 4096, 00:19:35.768 "num_blocks": 26476544, 00:19:35.768 "uuid": "aae41064-aa70-4944-b65f-a219868e1ad4", 00:19:35.769 "assigned_rate_limits": { 00:19:35.769 "rw_ios_per_sec": 0, 00:19:35.769 "rw_mbytes_per_sec": 0, 00:19:35.769 "r_mbytes_per_sec": 0, 00:19:35.769 "w_mbytes_per_sec": 0 00:19:35.769 }, 00:19:35.769 "claimed": false, 00:19:35.769 "zoned": false, 00:19:35.769 "supported_io_types": { 00:19:35.769 "read": true, 00:19:35.769 "write": true, 00:19:35.769 "unmap": true, 00:19:35.769 "flush": false, 00:19:35.769 "reset": true, 00:19:35.769 "nvme_admin": false, 00:19:35.769 "nvme_io": false, 00:19:35.769 "nvme_io_md": false, 00:19:35.769 "write_zeroes": true, 00:19:35.769 "zcopy": false, 00:19:35.769 "get_zone_info": false, 00:19:35.769 "zone_management": false, 00:19:35.769 "zone_append": false, 00:19:35.769 "compare": false, 00:19:35.769 "compare_and_write": false, 00:19:35.769 "abort": false, 00:19:35.769 "seek_hole": true, 00:19:35.769 "seek_data": true, 00:19:35.769 "copy": false, 00:19:35.769 "nvme_iov_md": false 00:19:35.769 }, 00:19:35.769 "driver_specific": { 00:19:35.769 "lvol": { 00:19:35.769 "lvol_store_uuid": "eeed38ef-1854-42ae-9164-4fe0a8018938", 00:19:35.769 "base_bdev": "nvme0n1", 00:19:35.769 "thin_provision": true, 00:19:35.769 "num_allocated_clusters": 0, 00:19:35.769 "snapshot": false, 00:19:35.769 "clone": false, 00:19:35.769 "esnap_clone": false 00:19:35.769 } 00:19:35.769 } 00:19:35.769 } 00:19:35.769 ]' 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:35.769 17:49:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aae41064-aa70-4944-b65f-a219868e1ad4 -c nvc0n1p0 --l2p_dram_limit 20 00:19:36.047 [2024-11-20 17:49:59.391962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.047 [2024-11-20 17:49:59.392359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:36.047 [2024-11-20 17:49:59.392430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.047 [2024-11-20 17:49:59.392466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.047 [2024-11-20 17:49:59.392554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.047 [2024-11-20 17:49:59.392646] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.047 [2024-11-20 17:49:59.392691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:36.047 [2024-11-20 17:49:59.392725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.047 [2024-11-20 17:49:59.392766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:36.047 [2024-11-20 17:49:59.393386] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:36.047 [2024-11-20 17:49:59.393506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.047 [2024-11-20 17:49:59.393559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.047 [2024-11-20 17:49:59.393607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:19:36.047 [2024-11-20 17:49:59.393706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.047 [2024-11-20 17:49:59.393805] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID deeb0d90-fd4b-4098-a0ad-04df35f3f20a 00:19:36.047 [2024-11-20 17:49:59.394901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.047 [2024-11-20 17:49:59.395025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:36.047 [2024-11-20 17:49:59.395109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:36.047 [2024-11-20 17:49:59.395188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.047 [2024-11-20 17:49:59.400017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.047 [2024-11-20 17:49:59.400137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.047 [2024-11-20 17:49:59.400219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:19:36.047 [2024-11-20 17:49:59.400301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.047 [2024-11-20 17:49:59.400404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.047 [2024-11-20 17:49:59.400476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.047 [2024-11-20 17:49:59.400558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:36.047 [2024-11-20 17:49:59.400604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.048 [2024-11-20 17:49:59.400666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.048 [2024-11-20 17:49:59.400739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:36.048 [2024-11-20 17:49:59.400796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:36.048 [2024-11-20 17:49:59.400827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.048 [2024-11-20 17:49:59.400935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.048 [2024-11-20 17:49:59.403853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.048 [2024-11-20 17:49:59.403935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.048 [2024-11-20 17:49:59.403970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:19:36.048 [2024-11-20 17:49:59.404008] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.048 [2024-11-20 17:49:59.404057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.048 [2024-11-20 17:49:59.404088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.048 [2024-11-20 17:49:59.404120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:36.048 [2024-11-20 17:49:59.404154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.048 [2024-11-20 17:49:59.404211] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:36.048 [2024-11-20 17:49:59.404351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:36.048 [2024-11-20 17:49:59.404434] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:36.048 [2024-11-20 17:49:59.404543] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:36.048 [2024-11-20 17:49:59.404613] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.048 [2024-11-20 17:49:59.404648] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.048 [2024-11-20 17:49:59.404710] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:36.048 [2024-11-20 17:49:59.404751] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:36.048 [2024-11-20 17:49:59.404780] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:36.048 [2024-11-20 17:49:59.404846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:36.048 [2024-11-20 17:49:59.404900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.048 [2024-11-20 17:49:59.404932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.048 [2024-11-20 17:49:59.404966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:19:36.048 [2024-11-20 17:49:59.405045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.048 [2024-11-20 17:49:59.405137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.048 [2024-11-20 17:49:59.405203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.048 [2024-11-20 17:49:59.405256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:36.048 [2024-11-20 17:49:59.405293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.048 [2024-11-20 17:49:59.405426] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.048 [2024-11-20 17:49:59.405473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.048 [2024-11-20 17:49:59.405505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.048 [2024-11-20 17:49:59.405589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.048 [2024-11-20 17:49:59.405630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:36.048 [2024-11-20 17:49:59.405662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.048 [2024-11-20 17:49:59.405690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:36.048 
[2024-11-20 17:49:59.405720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.048 [2024-11-20 17:49:59.405794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:36.048 [2024-11-20 17:49:59.405836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.048 [2024-11-20 17:49:59.405866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.048 [2024-11-20 17:49:59.405908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:36.048 [2024-11-20 17:49:59.405937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.048 [2024-11-20 17:49:59.406006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.048 [2024-11-20 17:49:59.406044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:36.048 [2024-11-20 17:49:59.406077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.048 [2024-11-20 17:49:59.406170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:36.048 [2024-11-20 17:49:59.406206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.048 [2024-11-20 17:49:59.406221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.048 [2024-11-20 17:49:59.406232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.048 [2024-11-20 17:49:59.406239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.048 [2024-11-20 17:49:59.406251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.048 [2024-11-20 17:49:59.406256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.048 [2024-11-20 17:49:59.406267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.048 [2024-11-20 17:49:59.406273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.048 [2024-11-20 17:49:59.406286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.048 [2024-11-20 17:49:59.406291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.048 [2024-11-20 17:49:59.406302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.048 [2024-11-20 17:49:59.406309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:36.048 [2024-11-20 17:49:59.406313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.048 [2024-11-20 17:49:59.406319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:36.048 [2024-11-20 17:49:59.406324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:36.048 [2024-11-20 17:49:59.406330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:36.048 [2024-11-20 17:49:59.406342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:36.048 [2024-11-20 17:49:59.406346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406352] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.048 [2024-11-20 17:49:59.406359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.048 [2024-11-20 17:49:59.406366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.048 [2024-11-20 17:49:59.406372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.048 [2024-11-20 17:49:59.406380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:36.048 [2024-11-20 17:49:59.406385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.048 [2024-11-20 17:49:59.406391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:36.048 [2024-11-20 17:49:59.406396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.048 [2024-11-20 17:49:59.406403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.048 [2024-11-20 17:49:59.406409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:36.048 [2024-11-20 17:49:59.406419] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.048 [2024-11-20 17:49:59.406427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.048 [2024-11-20 17:49:59.406435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:36.048 [2024-11-20 17:49:59.406440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:36.048 [2024-11-20 17:49:59.406447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:36.048 [2024-11-20 17:49:59.406453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:36.048 [2024-11-20 17:49:59.406459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:36.048 [2024-11-20 17:49:59.406465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:36.048 [2024-11-20 17:49:59.406480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:36.048 [2024-11-20 17:49:59.406486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:36.048 [2024-11-20 17:49:59.406494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:36.048 [2024-11-20 17:49:59.406500] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:36.048 [2024-11-20 17:49:59.406506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:36.048 [2024-11-20 17:49:59.406512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:36.048 [2024-11-20 17:49:59.406518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:36.048 [2024-11-20 17:49:59.406524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:36.049 [2024-11-20 17:49:59.406532] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.049 [2024-11-20 17:49:59.406538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.049 [2024-11-20 17:49:59.406545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.049 [2024-11-20 17:49:59.406551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.049 [2024-11-20 17:49:59.406557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.049 [2024-11-20 17:49:59.406563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.049 [2024-11-20 17:49:59.406570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.049 [2024-11-20 17:49:59.406577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.049 [2024-11-20 17:49:59.406585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:19:36.049 [2024-11-20 17:49:59.406590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.049 [2024-11-20 17:49:59.406620] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
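Note that the dump_region MiB figures and the superblock dump's hex blk_offs/blk_sz pairs describe the same layout, so one can be cross-checked against the other. A minimal bash sketch; the 4096-byte FTL block size is an assumption, though the numbers themselves corroborate it (0x5020 blocks * 4 KiB = 80.12 MiB, the band_md offset reported above):

    # Convert the band_md superblock entry (blk_offs:0x5020 blk_sz:0x80)
    # to MiB, assuming 4096-byte FTL blocks.
    blk_offs=0x5020 blk_sz=0x80
    echo "offset: $(echo "scale=2; $((blk_offs)) * 4096 / 1048576" | bc) MiB"   # prints 80.12 (dump: "offset: 80.12 MiB")
    echo "blocks: $(echo "scale=2; $((blk_sz)) * 4096 / 1048576" | bc) MiB"     # prints .50  (dump: "blocks: 0.50 MiB")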
00:19:36.049 [2024-11-20 17:49:59.406627] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:39.343 [2024-11-20 17:50:02.541255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.541423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:39.343 [2024-11-20 17:50:02.541451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3134.625 ms 00:19:39.343 [2024-11-20 17:50:02.541460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.567080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.567118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.343 [2024-11-20 17:50:02.567132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.419 ms 00:19:39.343 [2024-11-20 17:50:02.567140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.567260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.567270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.343 [2024-11-20 17:50:02.567282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:39.343 [2024-11-20 17:50:02.567289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.609530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.609567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.343 [2024-11-20 17:50:02.609583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.207 ms 00:19:39.343 [2024-11-20 17:50:02.609591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.609625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.609637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.343 [2024-11-20 17:50:02.609647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:39.343 [2024-11-20 17:50:02.609654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.610049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.610065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.343 [2024-11-20 17:50:02.610077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:19:39.343 [2024-11-20 17:50:02.610085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.610195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.610204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.343 [2024-11-20 17:50:02.610215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:39.343 [2024-11-20 17:50:02.610222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.623179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.623207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.343 [2024-11-20 
17:50:02.623218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.941 ms 00:19:39.343 [2024-11-20 17:50:02.623226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.634504] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:39.343 [2024-11-20 17:50:02.639591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.639623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:39.343 [2024-11-20 17:50:02.639633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.302 ms 00:19:39.343 [2024-11-20 17:50:02.639642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.716461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.716503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:39.343 [2024-11-20 17:50:02.716515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.799 ms 00:19:39.343 [2024-11-20 17:50:02.716524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.716694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.716709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:39.343 [2024-11-20 17:50:02.716718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:39.343 [2024-11-20 17:50:02.716726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.740601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.740734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:39.343 [2024-11-20 17:50:02.740752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms 00:19:39.343 [2024-11-20 17:50:02.740762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.763710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.763848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:39.343 [2024-11-20 17:50:02.763867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.675 ms 00:19:39.343 [2024-11-20 17:50:02.763895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.764489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.764513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:39.343 [2024-11-20 17:50:02.764522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:39.343 [2024-11-20 17:50:02.764531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 17:50:02.836521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.836562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:39.343 [2024-11-20 17:50:02.836573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.959 ms 00:19:39.343 [2024-11-20 17:50:02.836583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.343 [2024-11-20 
17:50:02.860967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.343 [2024-11-20 17:50:02.861002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:39.343 [2024-11-20 17:50:02.861014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.321 ms 00:19:39.343 [2024-11-20 17:50:02.861024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-20 17:50:02.884697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.602 [2024-11-20 17:50:02.884740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:39.602 [2024-11-20 17:50:02.884750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.643 ms 00:19:39.602 [2024-11-20 17:50:02.884759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-20 17:50:02.909063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.602 [2024-11-20 17:50:02.909099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:39.602 [2024-11-20 17:50:02.909111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.274 ms 00:19:39.602 [2024-11-20 17:50:02.909122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-20 17:50:02.909157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.602 [2024-11-20 17:50:02.909170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:39.602 [2024-11-20 17:50:02.909178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:39.602 [2024-11-20 17:50:02.909187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-20 17:50:02.909260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.602 [2024-11-20 17:50:02.909271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:39.602 [2024-11-20 17:50:02.909279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:39.602 [2024-11-20 17:50:02.909288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.602 [2024-11-20 17:50:02.910113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3517.702 ms, result 0 00:19:39.602 { 00:19:39.602 "name": "ftl0", 00:19:39.602 "uuid": "deeb0d90-fd4b-4098-a0ad-04df35f3f20a" 00:19:39.602 } 00:19:39.602 17:50:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:39.602 17:50:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:39.602 17:50:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:39.602 17:50:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:39.860 [2024-11-20 17:50:03.222525] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:39.860 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:39.860 Zero copy mechanism will not be used. 00:19:39.860 Running I/O for 4 seconds... 
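The zero-copy notice just above is a plain threshold check: the first job's 69632-byte I/O size (68 KiB, i.e. 65536 + 4096) exceeds bdevperf's 65536-byte zero-copy cutoff, so buffers are bounced instead. A sketch of that comparison, with both values copied from the log:

    # Values taken from the log above; bdevperf compares the requested
    # I/O size against its zero-copy threshold before starting the run.
    io_size=69632 threshold=65536
    if (( io_size > threshold )); then
        echo "I/O size of $io_size is greater than zero copy threshold ($threshold)."
        echo "Zero copy mechanism will not be used."
    fi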
00:19:41.729 1422.00 IOPS, 94.43 MiB/s [2024-11-20T17:50:06.689Z] 1337.50 IOPS, 88.82 MiB/s [2024-11-20T17:50:07.256Z] 1226.33 IOPS, 81.44 MiB/s [2024-11-20T17:50:07.256Z] 1261.00 IOPS, 83.74 MiB/s 00:19:43.716 Latency(us) 00:19:43.716 [2024-11-20T17:50:07.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.716 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:43.716 ftl0 : 4.00 1260.56 83.71 0.00 0.00 830.26 178.81 2659.25 00:19:43.716 [2024-11-20T17:50:07.256Z] =================================================================================================================== 00:19:43.716 [2024-11-20T17:50:07.256Z] Total : 1260.56 83.71 0.00 0.00 830.26 178.81 2659.25 00:19:43.716 { 00:19:43.716 "results": [ 00:19:43.716 { 00:19:43.716 "job": "ftl0", 00:19:43.716 "core_mask": "0x1", 00:19:43.716 "workload": "randwrite", 00:19:43.716 "status": "finished", 00:19:43.716 "queue_depth": 1, 00:19:43.716 "io_size": 69632, 00:19:43.716 "runtime": 4.002198, 00:19:43.716 "iops": 1260.5573237505992, 00:19:43.716 "mibps": 83.70888478031323, 00:19:43.716 "io_failed": 0, 00:19:43.716 "io_timeout": 0, 00:19:43.716 "avg_latency_us": 830.2643055576732, 00:19:43.716 "min_latency_us": 178.80615384615385, 00:19:43.716 "max_latency_us": 2659.249230769231 00:19:43.716 } 00:19:43.716 ], 00:19:43.716 "core_count": 1 00:19:43.716 } 00:19:43.716 [2024-11-20 17:50:07.232758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:43.716 17:50:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:43.974 [2024-11-20 17:50:07.367756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:43.974 Running I/O for 4 seconds... 
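The mibps column for the finished q=1 job is derived rather than independently measured: iops * io_size / 2^20. Recomputing it from the JSON block above (a sketch; bc truncates where the summary table rounds):

    # iops and io_size copied from the q=1 "results" JSON above.
    iops=1260.5573237505992 io_size=69632
    echo "scale=4; $iops * $io_size / 1048576" | bc   # prints 83.7088, the reported 83.71 MiB/s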
00:19:45.865 6486.00 IOPS, 25.34 MiB/s [2024-11-20T17:50:10.795Z] 6094.00 IOPS, 23.80 MiB/s [2024-11-20T17:50:11.740Z] 5601.67 IOPS, 21.88 MiB/s [2024-11-20T17:50:11.740Z] 5328.50 IOPS, 20.81 MiB/s 00:19:48.200 Latency(us) 00:19:48.200 [2024-11-20T17:50:11.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.200 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:48.200 ftl0 : 4.03 5319.08 20.78 0.00 0.00 23964.03 274.12 74206.92 00:19:48.200 [2024-11-20T17:50:11.740Z] =================================================================================================================== 00:19:48.200 [2024-11-20T17:50:11.740Z] Total : 5319.08 20.78 0.00 0.00 23964.03 0.00 74206.92 00:19:48.200 [2024-11-20 17:50:11.408415] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:48.200 { 00:19:48.200 "results": [ 00:19:48.200 { 00:19:48.200 "job": "ftl0", 00:19:48.200 "core_mask": "0x1", 00:19:48.200 "workload": "randwrite", 00:19:48.200 "status": "finished", 00:19:48.200 "queue_depth": 128, 00:19:48.200 "io_size": 4096, 00:19:48.200 "runtime": 4.03115, 00:19:48.200 "iops": 5319.077682547164, 00:19:48.200 "mibps": 20.77764719744986, 00:19:48.200 "io_failed": 0, 00:19:48.200 "io_timeout": 0, 00:19:48.200 "avg_latency_us": 23964.033389537428, 00:19:48.200 "min_latency_us": 274.11692307692306, 00:19:48.200 "max_latency_us": 74206.91692307692 00:19:48.200 } 00:19:48.200 ], 00:19:48.200 "core_count": 1 00:19:48.200 } 00:19:48.200 17:50:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:48.200 [2024-11-20 17:50:11.524435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:48.200 Running I/O for 4 seconds... 
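A useful sanity check on the q=128 job above is Little's law: average in-flight I/Os = IOPS * mean latency, which should land near the requested queue depth. With the figures from the JSON block (sketch only):

    # iops and avg_latency_us copied from the q=128 "results" JSON above.
    iops=5319.077682547164 avg_lat_us=23964.033389537428
    echo "scale=1; $iops * $avg_lat_us / 1000000" | bc   # prints 127.4, close to the requested depth of 128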
00:19:50.088 4290.00 IOPS, 16.76 MiB/s [2024-11-20T17:50:14.569Z] 4344.50 IOPS, 16.97 MiB/s [2024-11-20T17:50:15.955Z] 4355.33 IOPS, 17.01 MiB/s [2024-11-20T17:50:15.955Z] 4364.50 IOPS, 17.05 MiB/s 00:19:52.415 Latency(us) 00:19:52.415 [2024-11-20T17:50:15.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.415 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:52.415 Verification LBA range: start 0x0 length 0x1400000 00:19:52.415 ftl0 : 4.02 4379.98 17.11 0.00 0.00 29137.55 466.31 40934.79 00:19:52.415 [2024-11-20T17:50:15.955Z] =================================================================================================================== 00:19:52.415 [2024-11-20T17:50:15.955Z] Total : 4379.98 17.11 0.00 0.00 29137.55 0.00 40934.79 00:19:52.415 [2024-11-20 17:50:15.556787] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:52.415 { 00:19:52.415 "results": [ 00:19:52.415 { 00:19:52.415 "job": "ftl0", 00:19:52.415 "core_mask": "0x1", 00:19:52.415 "workload": "verify", 00:19:52.415 "status": "finished", 00:19:52.415 "verify_range": { 00:19:52.415 "start": 0, 00:19:52.415 "length": 20971520 00:19:52.415 }, 00:19:52.415 "queue_depth": 128, 00:19:52.415 "io_size": 4096, 00:19:52.415 "runtime": 4.015083, 00:19:52.415 "iops": 4379.98417467335, 00:19:52.415 "mibps": 17.109313182317774, 00:19:52.415 "io_failed": 0, 00:19:52.415 "io_timeout": 0, 00:19:52.415 "avg_latency_us": 29137.55490958717, 00:19:52.415 "min_latency_us": 466.31384615384616, 00:19:52.415 "max_latency_us": 40934.79384615384 00:19:52.415 } 00:19:52.415 ], 00:19:52.415 "core_count": 1 00:19:52.415 } 00:19:52.415 17:50:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:52.415 [2024-11-20 17:50:15.828130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.415 [2024-11-20 17:50:15.828194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:52.415 [2024-11-20 17:50:15.828210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:52.415 [2024-11-20 17:50:15.828221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.415 [2024-11-20 17:50:15.828244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.415 [2024-11-20 17:50:15.831274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.415 [2024-11-20 17:50:15.831460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:52.415 [2024-11-20 17:50:15.831487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.007 ms 00:19:52.415 [2024-11-20 17:50:15.831497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.415 [2024-11-20 17:50:15.834654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.415 [2024-11-20 17:50:15.834808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:52.415 [2024-11-20 17:50:15.834840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.118 ms 00:19:52.415 [2024-11-20 17:50:15.834850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.053380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.053442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:52.677 [2024-11-20 17:50:16.053465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 218.501 ms 00:19:52.677 [2024-11-20 17:50:16.053475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.059680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.059721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:52.677 [2024-11-20 17:50:16.059738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.156 ms 00:19:52.677 [2024-11-20 17:50:16.059751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.085969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.086016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:52.677 [2024-11-20 17:50:16.086032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.156 ms 00:19:52.677 [2024-11-20 17:50:16.086040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.103293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.103344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:52.677 [2024-11-20 17:50:16.103360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.201 ms 00:19:52.677 [2024-11-20 17:50:16.103369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.103526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.103538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:52.677 [2024-11-20 17:50:16.103552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:52.677 [2024-11-20 17:50:16.103560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.129571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.129755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:52.677 [2024-11-20 17:50:16.129781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.989 ms 00:19:52.677 [2024-11-20 17:50:16.129789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.154926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.154974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:52.677 [2024-11-20 17:50:16.154989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.005 ms 00:19:52.677 [2024-11-20 17:50:16.154997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.179896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.180064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:52.677 [2024-11-20 17:50:16.180089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.847 ms 00:19:52.677 [2024-11-20 17:50:16.180097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.204602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.677 [2024-11-20 17:50:16.204649] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:52.677 [2024-11-20 17:50:16.204666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.329 ms 00:19:52.677 [2024-11-20 17:50:16.204674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.677 [2024-11-20 17:50:16.204718] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:52.677 [2024-11-20 17:50:16.204735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:52.677 [2024-11-20 17:50:16.204955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.204992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:52.677 [2024-11-20 17:50:16.205127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205637] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:52.678 [2024-11-20 17:50:16.205680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:52.678 [2024-11-20 17:50:16.205689] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: deeb0d90-fd4b-4098-a0ad-04df35f3f20a 00:19:52.678 [2024-11-20 17:50:16.205699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:52.678 [2024-11-20 17:50:16.205709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:52.678 [2024-11-20 17:50:16.205716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:52.678 [2024-11-20 17:50:16.205726] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:52.678 [2024-11-20 17:50:16.205734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:52.678 [2024-11-20 17:50:16.205744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:52.678 [2024-11-20 17:50:16.205751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:52.678 [2024-11-20 17:50:16.205761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:52.678 [2024-11-20 17:50:16.205767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:52.678 [2024-11-20 17:50:16.205776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.678 [2024-11-20 17:50:16.205784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:52.678 [2024-11-20 17:50:16.205795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:19:52.678 [2024-11-20 17:50:16.205802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.219539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.940 [2024-11-20 17:50:16.219702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:52.940 [2024-11-20 17:50:16.219725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.696 ms 00:19:52.940 [2024-11-20 17:50:16.219733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.220161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.940 [2024-11-20 17:50:16.220174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:52.940 [2024-11-20 17:50:16.220186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:19:52.940 [2024-11-20 17:50:16.220194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.258525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.258683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.940 [2024-11-20 17:50:16.258711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.258719] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.258792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.258801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.940 [2024-11-20 17:50:16.258811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.258819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.258948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.258960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.940 [2024-11-20 17:50:16.258971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.258978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.258997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.259005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.940 [2024-11-20 17:50:16.259015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.259023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.343313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.343372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.940 [2024-11-20 17:50:16.343392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.343400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.411218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.411271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.940 [2024-11-20 17:50:16.411286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.411295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.411382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.411392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.940 [2024-11-20 17:50:16.411403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.411412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.411483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.411494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.940 [2024-11-20 17:50:16.411505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.411513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.411612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.411624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.940 [2024-11-20 17:50:16.411638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:52.940 [2024-11-20 17:50:16.411646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.411681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.411691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:52.940 [2024-11-20 17:50:16.411701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.411709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.411749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.411761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.940 [2024-11-20 17:50:16.411772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.411780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.411828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.940 [2024-11-20 17:50:16.411846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.940 [2024-11-20 17:50:16.411856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.940 [2024-11-20 17:50:16.411865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.940 [2024-11-20 17:50:16.412057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.877 ms, result 0 00:19:52.940 true 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76155 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76155 ']' 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76155 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76155 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76155' 00:19:52.940 killing process with pid 76155 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76155 00:19:52.940 Received shutdown signal, test time was about 4.000000 seconds 00:19:52.940 00:19:52.940 Latency(us) 00:19:52.940 [2024-11-20T17:50:16.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.940 [2024-11-20T17:50:16.480Z] =================================================================================================================== 00:19:52.940 [2024-11-20T17:50:16.480Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.940 17:50:16 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76155 00:19:53.884 17:50:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:53.884 Remove shared memory files 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:53.885 17:50:17 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:53.885 ************************************ 00:19:53.885 END TEST ftl_bdevperf 00:19:53.885 ************************************ 00:19:53.885 00:19:53.885 real 0m21.996s 00:19:53.885 user 0m24.711s 00:19:53.885 sys 0m0.959s 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.885 17:50:17 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:53.885 17:50:17 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:53.885 17:50:17 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:53.885 17:50:17 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.885 17:50:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:53.885 ************************************ 00:19:53.885 START TEST ftl_trim 00:19:53.885 ************************************ 00:19:53.885 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:54.147 * Looking for test storage... 00:19:54.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.147 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.147 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.147 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.147 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.147 17:50:17 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:54.147 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.147 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.147 --rc genhtml_branch_coverage=1 00:19:54.147 --rc genhtml_function_coverage=1 00:19:54.147 --rc genhtml_legend=1 00:19:54.147 --rc geninfo_all_blocks=1 00:19:54.147 --rc geninfo_unexecuted_blocks=1 00:19:54.147 00:19:54.147 ' 00:19:54.147 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.147 --rc genhtml_branch_coverage=1 00:19:54.147 --rc genhtml_function_coverage=1 00:19:54.147 --rc genhtml_legend=1 00:19:54.147 --rc geninfo_all_blocks=1 00:19:54.147 --rc geninfo_unexecuted_blocks=1 00:19:54.147 00:19:54.147 ' 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.148 --rc genhtml_branch_coverage=1 00:19:54.148 --rc genhtml_function_coverage=1 00:19:54.148 --rc genhtml_legend=1 00:19:54.148 --rc geninfo_all_blocks=1 00:19:54.148 --rc geninfo_unexecuted_blocks=1 00:19:54.148 00:19:54.148 ' 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.148 --rc genhtml_branch_coverage=1 00:19:54.148 --rc genhtml_function_coverage=1 00:19:54.148 --rc genhtml_legend=1 00:19:54.148 --rc geninfo_all_blocks=1 00:19:54.148 --rc geninfo_unexecuted_blocks=1 00:19:54.148 00:19:54.148 ' 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
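The xtrace above shows scripts/common.sh gating on the lcov version: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them numerically field by field, returning success because 1 < 2, which selects the legacy --rc lcov_*=1 option spelling exported just after. A condensed sketch of that comparison, reconstructed only as far as the trace shows (the handling of unequal field counts is an assumption):

    # Hypothetical condensation of the cmp_versions logic traced above.
    lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller: "<" holds
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal: not strictly less
    }
    lt 1.15 2 && echo "lcov 1.15 < 2: keep the legacy --rc lcov_*=1 options"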
00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.148 17:50:17 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76502 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:54.148 17:50:17 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76502 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76502 ']' 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.148 17:50:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:54.148 [2024-11-20 17:50:17.635586] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:19:54.148 [2024-11-20 17:50:17.635989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76502 ] 00:19:54.409 [2024-11-20 17:50:17.802310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:54.409 [2024-11-20 17:50:17.932559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.409 [2024-11-20 17:50:17.932820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.409 [2024-11-20 17:50:17.932827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.353 17:50:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.353 17:50:18 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:55.353 17:50:18 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:55.353 17:50:18 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:55.353 17:50:18 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:55.353 17:50:18 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:55.353 17:50:18 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:55.353 17:50:18 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:55.615 17:50:18 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:55.615 17:50:18 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:55.615 17:50:18 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:55.615 17:50:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:55.615 17:50:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:55.615 17:50:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:55.615 17:50:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:55.615 17:50:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:55.875 17:50:19 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:55.875 { 00:19:55.875 "name": "nvme0n1", 00:19:55.875 "aliases": [ 
00:19:55.875 "c0086270-0113-4e1c-99bd-744b4defdb45" 00:19:55.875 ], 00:19:55.875 "product_name": "NVMe disk", 00:19:55.875 "block_size": 4096, 00:19:55.875 "num_blocks": 1310720, 00:19:55.875 "uuid": "c0086270-0113-4e1c-99bd-744b4defdb45", 00:19:55.875 "numa_id": -1, 00:19:55.875 "assigned_rate_limits": { 00:19:55.875 "rw_ios_per_sec": 0, 00:19:55.875 "rw_mbytes_per_sec": 0, 00:19:55.875 "r_mbytes_per_sec": 0, 00:19:55.875 "w_mbytes_per_sec": 0 00:19:55.875 }, 00:19:55.875 "claimed": true, 00:19:55.875 "claim_type": "read_many_write_one", 00:19:55.875 "zoned": false, 00:19:55.875 "supported_io_types": { 00:19:55.875 "read": true, 00:19:55.875 "write": true, 00:19:55.875 "unmap": true, 00:19:55.875 "flush": true, 00:19:55.875 "reset": true, 00:19:55.875 "nvme_admin": true, 00:19:55.875 "nvme_io": true, 00:19:55.875 "nvme_io_md": false, 00:19:55.875 "write_zeroes": true, 00:19:55.875 "zcopy": false, 00:19:55.875 "get_zone_info": false, 00:19:55.875 "zone_management": false, 00:19:55.875 "zone_append": false, 00:19:55.875 "compare": true, 00:19:55.875 "compare_and_write": false, 00:19:55.875 "abort": true, 00:19:55.875 "seek_hole": false, 00:19:55.875 "seek_data": false, 00:19:55.875 "copy": true, 00:19:55.875 "nvme_iov_md": false 00:19:55.875 }, 00:19:55.875 "driver_specific": { 00:19:55.875 "nvme": [ 00:19:55.875 { 00:19:55.875 "pci_address": "0000:00:11.0", 00:19:55.875 "trid": { 00:19:55.875 "trtype": "PCIe", 00:19:55.875 "traddr": "0000:00:11.0" 00:19:55.875 }, 00:19:55.875 "ctrlr_data": { 00:19:55.875 "cntlid": 0, 00:19:55.875 "vendor_id": "0x1b36", 00:19:55.875 "model_number": "QEMU NVMe Ctrl", 00:19:55.875 "serial_number": "12341", 00:19:55.875 "firmware_revision": "8.0.0", 00:19:55.875 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:55.875 "oacs": { 00:19:55.875 "security": 0, 00:19:55.875 "format": 1, 00:19:55.875 "firmware": 0, 00:19:55.875 "ns_manage": 1 00:19:55.875 }, 00:19:55.875 "multi_ctrlr": false, 00:19:55.875 "ana_reporting": false 00:19:55.875 }, 00:19:55.875 "vs": { 00:19:55.875 "nvme_version": "1.4" 00:19:55.875 }, 00:19:55.875 "ns_data": { 00:19:55.875 "id": 1, 00:19:55.875 "can_share": false 00:19:55.875 } 00:19:55.875 } 00:19:55.875 ], 00:19:55.875 "mp_policy": "active_passive" 00:19:55.875 } 00:19:55.875 } 00:19:55.875 ]' 00:19:55.875 17:50:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:55.875 17:50:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:55.875 17:50:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:55.875 17:50:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:55.875 17:50:19 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:55.875 17:50:19 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:55.875 17:50:19 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:55.875 17:50:19 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:55.875 17:50:19 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:55.875 17:50:19 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:55.875 17:50:19 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:56.135 17:50:19 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=eeed38ef-1854-42ae-9164-4fe0a8018938 00:19:56.135 17:50:19 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:56.135 17:50:19 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u eeed38ef-1854-42ae-9164-4fe0a8018938 00:19:56.397 17:50:19 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:56.658 17:50:19 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5629153b-9207-409d-b3eb-e66884e6e2fe 00:19:56.658 17:50:19 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5629153b-9207-409d-b3eb-e66884e6e2fe 00:19:56.658 17:50:20 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:56.658 17:50:20 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:56.658 17:50:20 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:56.658 17:50:20 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:56.658 17:50:20 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:56.658 17:50:20 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:56.658 17:50:20 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:56.658 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:56.658 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:56.658 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:56.658 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:56.658 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:56.920 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:56.920 { 00:19:56.920 "name": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 00:19:56.920 "aliases": [ 00:19:56.920 "lvs/nvme0n1p0" 00:19:56.920 ], 00:19:56.920 "product_name": "Logical Volume", 00:19:56.920 "block_size": 4096, 00:19:56.920 "num_blocks": 26476544, 00:19:56.920 "uuid": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 00:19:56.920 "assigned_rate_limits": { 00:19:56.920 "rw_ios_per_sec": 0, 00:19:56.920 "rw_mbytes_per_sec": 0, 00:19:56.920 "r_mbytes_per_sec": 0, 00:19:56.920 "w_mbytes_per_sec": 0 00:19:56.920 }, 00:19:56.920 "claimed": false, 00:19:56.920 "zoned": false, 00:19:56.920 "supported_io_types": { 00:19:56.920 "read": true, 00:19:56.920 "write": true, 00:19:56.920 "unmap": true, 00:19:56.920 "flush": false, 00:19:56.920 "reset": true, 00:19:56.920 "nvme_admin": false, 00:19:56.920 "nvme_io": false, 00:19:56.920 "nvme_io_md": false, 00:19:56.920 "write_zeroes": true, 00:19:56.921 "zcopy": false, 00:19:56.921 "get_zone_info": false, 00:19:56.921 "zone_management": false, 00:19:56.921 "zone_append": false, 00:19:56.921 "compare": false, 00:19:56.921 "compare_and_write": false, 00:19:56.921 "abort": false, 00:19:56.921 "seek_hole": true, 00:19:56.921 "seek_data": true, 00:19:56.921 "copy": false, 00:19:56.921 "nvme_iov_md": false 00:19:56.921 }, 00:19:56.921 "driver_specific": { 00:19:56.921 "lvol": { 00:19:56.921 "lvol_store_uuid": "5629153b-9207-409d-b3eb-e66884e6e2fe", 00:19:56.921 "base_bdev": "nvme0n1", 00:19:56.921 "thin_provision": true, 00:19:56.921 "num_allocated_clusters": 0, 00:19:56.921 "snapshot": false, 00:19:56.921 "clone": false, 00:19:56.921 "esnap_clone": false 00:19:56.921 } 00:19:56.921 } 00:19:56.921 } 00:19:56.921 ]' 00:19:56.921 17:50:20 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:56.921 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:56.921 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:57.182 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:57.182 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:57.182 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:57.182 17:50:20 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:57.182 17:50:20 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:57.182 17:50:20 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:57.444 17:50:20 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:57.444 17:50:20 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:57.444 17:50:20 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:57.444 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:57.444 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:57.444 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:57.444 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:57.444 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:57.444 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:57.444 { 00:19:57.444 "name": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 00:19:57.444 "aliases": [ 00:19:57.444 "lvs/nvme0n1p0" 00:19:57.444 ], 00:19:57.444 "product_name": "Logical Volume", 00:19:57.444 "block_size": 4096, 00:19:57.444 "num_blocks": 26476544, 00:19:57.444 "uuid": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 00:19:57.444 "assigned_rate_limits": { 00:19:57.444 "rw_ios_per_sec": 0, 00:19:57.444 "rw_mbytes_per_sec": 0, 00:19:57.444 "r_mbytes_per_sec": 0, 00:19:57.444 "w_mbytes_per_sec": 0 00:19:57.444 }, 00:19:57.444 "claimed": false, 00:19:57.444 "zoned": false, 00:19:57.444 "supported_io_types": { 00:19:57.444 "read": true, 00:19:57.444 "write": true, 00:19:57.444 "unmap": true, 00:19:57.444 "flush": false, 00:19:57.444 "reset": true, 00:19:57.444 "nvme_admin": false, 00:19:57.444 "nvme_io": false, 00:19:57.444 "nvme_io_md": false, 00:19:57.444 "write_zeroes": true, 00:19:57.444 "zcopy": false, 00:19:57.444 "get_zone_info": false, 00:19:57.444 "zone_management": false, 00:19:57.444 "zone_append": false, 00:19:57.444 "compare": false, 00:19:57.444 "compare_and_write": false, 00:19:57.444 "abort": false, 00:19:57.444 "seek_hole": true, 00:19:57.444 "seek_data": true, 00:19:57.444 "copy": false, 00:19:57.444 "nvme_iov_md": false 00:19:57.444 }, 00:19:57.444 "driver_specific": { 00:19:57.444 "lvol": { 00:19:57.444 "lvol_store_uuid": "5629153b-9207-409d-b3eb-e66884e6e2fe", 00:19:57.444 "base_bdev": "nvme0n1", 00:19:57.444 "thin_provision": true, 00:19:57.444 "num_allocated_clusters": 0, 00:19:57.444 "snapshot": false, 00:19:57.444 "clone": false, 00:19:57.444 "esnap_clone": false 00:19:57.444 } 00:19:57.444 } 00:19:57.444 } 00:19:57.444 ]' 00:19:57.444 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:57.706 17:50:20 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:57.706 17:50:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:57.706 17:50:21 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:57.706 17:50:21 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:57.706 17:50:21 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:57.706 17:50:21 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:57.706 17:50:21 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:57.706 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef 00:19:57.968 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:57.968 { 00:19:57.968 "name": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 00:19:57.968 "aliases": [ 00:19:57.968 "lvs/nvme0n1p0" 00:19:57.968 ], 00:19:57.968 "product_name": "Logical Volume", 00:19:57.968 "block_size": 4096, 00:19:57.968 "num_blocks": 26476544, 00:19:57.968 "uuid": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 00:19:57.968 "assigned_rate_limits": { 00:19:57.968 "rw_ios_per_sec": 0, 00:19:57.968 "rw_mbytes_per_sec": 0, 00:19:57.968 "r_mbytes_per_sec": 0, 00:19:57.968 "w_mbytes_per_sec": 0 00:19:57.968 }, 00:19:57.968 "claimed": false, 00:19:57.968 "zoned": false, 00:19:57.968 "supported_io_types": { 00:19:57.968 "read": true, 00:19:57.968 "write": true, 00:19:57.968 "unmap": true, 00:19:57.968 "flush": false, 00:19:57.968 "reset": true, 00:19:57.968 "nvme_admin": false, 00:19:57.968 "nvme_io": false, 00:19:57.968 "nvme_io_md": false, 00:19:57.968 "write_zeroes": true, 00:19:57.968 "zcopy": false, 00:19:57.968 "get_zone_info": false, 00:19:57.968 "zone_management": false, 00:19:57.968 "zone_append": false, 00:19:57.968 "compare": false, 00:19:57.968 "compare_and_write": false, 00:19:57.968 "abort": false, 00:19:57.968 "seek_hole": true, 00:19:57.968 "seek_data": true, 00:19:57.968 "copy": false, 00:19:57.968 "nvme_iov_md": false 00:19:57.968 }, 00:19:57.968 "driver_specific": { 00:19:57.968 "lvol": { 00:19:57.968 "lvol_store_uuid": "5629153b-9207-409d-b3eb-e66884e6e2fe", 00:19:57.968 "base_bdev": "nvme0n1", 00:19:57.968 "thin_provision": true, 00:19:57.968 "num_allocated_clusters": 0, 00:19:57.968 "snapshot": false, 00:19:57.968 "clone": false, 00:19:57.968 "esnap_clone": false 00:19:57.968 } 00:19:57.968 } 00:19:57.968 } 00:19:57.968 ]' 00:19:57.968 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:57.968 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:57.968 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:57.968 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:57.968 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:57.968 17:50:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:57.968 17:50:21 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:57.968 17:50:21 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:58.231 [2024-11-20 17:50:21.665385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.665428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.231 [2024-11-20 17:50:21.665445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.231 [2024-11-20 17:50:21.665453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.668490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.668530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.231 [2024-11-20 17:50:21.668543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:19:58.231 [2024-11-20 17:50:21.668551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.668721] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.231 [2024-11-20 17:50:21.669425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.231 [2024-11-20 17:50:21.669550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.669561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.231 [2024-11-20 17:50:21.669571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:19:58.231 [2024-11-20 17:50:21.669579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.669679] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:19:58.231 [2024-11-20 17:50:21.670685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.670710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:58.231 [2024-11-20 17:50:21.670719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:58.231 [2024-11-20 17:50:21.670728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.675560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.675589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.231 [2024-11-20 17:50:21.675600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.764 ms 00:19:58.231 [2024-11-20 17:50:21.675611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.675719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.675732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.231 [2024-11-20 17:50:21.675739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.065 ms 00:19:58.231 [2024-11-20 17:50:21.675751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.675784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.675794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.231 [2024-11-20 17:50:21.675802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:58.231 [2024-11-20 17:50:21.675812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.675840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.231 [2024-11-20 17:50:21.679316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.679344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.231 [2024-11-20 17:50:21.679356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:19:58.231 [2024-11-20 17:50:21.679364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.679415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.679424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.231 [2024-11-20 17:50:21.679433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:58.231 [2024-11-20 17:50:21.679452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.679480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:58.231 [2024-11-20 17:50:21.679611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.231 [2024-11-20 17:50:21.679626] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.231 [2024-11-20 17:50:21.679636] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.231 [2024-11-20 17:50:21.679647] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.231 [2024-11-20 17:50:21.679656] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.231 [2024-11-20 17:50:21.679665] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.231 [2024-11-20 17:50:21.679672] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.231 [2024-11-20 17:50:21.679680] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.231 [2024-11-20 17:50:21.679689] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.231 [2024-11-20 17:50:21.679698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 [2024-11-20 17:50:21.679704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.231 [2024-11-20 17:50:21.679713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:19:58.231 [2024-11-20 17:50:21.679720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.679836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.231 
[2024-11-20 17:50:21.679845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.231 [2024-11-20 17:50:21.679855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:58.231 [2024-11-20 17:50:21.679862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.231 [2024-11-20 17:50:21.679991] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.231 [2024-11-20 17:50:21.680001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.231 [2024-11-20 17:50:21.680010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.231 [2024-11-20 17:50:21.680017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.231 [2024-11-20 17:50:21.680026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.231 [2024-11-20 17:50:21.680033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.231 [2024-11-20 17:50:21.680041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.231 [2024-11-20 17:50:21.680047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.231 [2024-11-20 17:50:21.680055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.231 [2024-11-20 17:50:21.680062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.231 [2024-11-20 17:50:21.680070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.231 [2024-11-20 17:50:21.680077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.231 [2024-11-20 17:50:21.680085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.231 [2024-11-20 17:50:21.680091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.231 [2024-11-20 17:50:21.680099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.231 [2024-11-20 17:50:21.680105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.231 [2024-11-20 17:50:21.680115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.231 [2024-11-20 17:50:21.680122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.231 [2024-11-20 17:50:21.680129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.231 [2024-11-20 17:50:21.680136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.232 [2024-11-20 17:50:21.680146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.232 [2024-11-20 17:50:21.680160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.232 [2024-11-20 17:50:21.680167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.232 [2024-11-20 17:50:21.680181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.232 [2024-11-20 17:50:21.680188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.232 [2024-11-20 17:50:21.680203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:58.232 [2024-11-20 17:50:21.680209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.232 [2024-11-20 17:50:21.680223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.232 [2024-11-20 17:50:21.680232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.232 [2024-11-20 17:50:21.680247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.232 [2024-11-20 17:50:21.680257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.232 [2024-11-20 17:50:21.680265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.232 [2024-11-20 17:50:21.680272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.232 [2024-11-20 17:50:21.680280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.232 [2024-11-20 17:50:21.680286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.232 [2024-11-20 17:50:21.680301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.232 [2024-11-20 17:50:21.680308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680314] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.232 [2024-11-20 17:50:21.680323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.232 [2024-11-20 17:50:21.680330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.232 [2024-11-20 17:50:21.680338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.232 [2024-11-20 17:50:21.680345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:58.232 [2024-11-20 17:50:21.680356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.232 [2024-11-20 17:50:21.680363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.232 [2024-11-20 17:50:21.680371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.232 [2024-11-20 17:50:21.680377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.232 [2024-11-20 17:50:21.680385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.232 [2024-11-20 17:50:21.680395] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.232 [2024-11-20 17:50:21.680405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.232 [2024-11-20 17:50:21.680415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.232 [2024-11-20 17:50:21.680424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.232 [2024-11-20 17:50:21.680431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:58.232 [2024-11-20 17:50:21.680439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.232 [2024-11-20 17:50:21.680446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.232 [2024-11-20 17:50:21.680454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.232 [2024-11-20 17:50:21.680461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.232 [2024-11-20 17:50:21.680469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.232 [2024-11-20 17:50:21.680476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.232 [2024-11-20 17:50:21.680486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.232 [2024-11-20 17:50:21.680493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.232 [2024-11-20 17:50:21.680501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.232 [2024-11-20 17:50:21.680509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.232 [2024-11-20 17:50:21.680518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.232 [2024-11-20 17:50:21.680525] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.232 [2024-11-20 17:50:21.680538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.232 [2024-11-20 17:50:21.680545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.232 [2024-11-20 17:50:21.680554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.232 [2024-11-20 17:50:21.680561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.232 [2024-11-20 17:50:21.680570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.232 [2024-11-20 17:50:21.680577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.232 [2024-11-20 17:50:21.680586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.232 [2024-11-20 17:50:21.680593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:19:58.232 [2024-11-20 17:50:21.680601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.232 [2024-11-20 17:50:21.680667] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:58.232 [2024-11-20 17:50:21.680679] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:00.782 [2024-11-20 17:50:24.108290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.108478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:00.782 [2024-11-20 17:50:24.108544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2427.615 ms 00:20:00.782 [2024-11-20 17:50:24.108572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.133410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.133552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.782 [2024-11-20 17:50:24.133608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.573 ms 00:20:00.782 [2024-11-20 17:50:24.133635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.133774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.133801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.782 [2024-11-20 17:50:24.133906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:00.782 [2024-11-20 17:50:24.133943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.176509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.176651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.782 [2024-11-20 17:50:24.176717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.387 ms 00:20:00.782 [2024-11-20 17:50:24.176745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.176829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.176965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.782 [2024-11-20 17:50:24.176993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.782 [2024-11-20 17:50:24.177014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.177324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.177365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.782 [2024-11-20 17:50:24.177386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:00.782 [2024-11-20 17:50:24.177406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.177578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.177685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.782 [2024-11-20 17:50:24.177734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:00.782 [2024-11-20 17:50:24.177760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.191813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.191951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:00.782 [2024-11-20 17:50:24.192009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.996 ms 00:20:00.782 [2024-11-20 17:50:24.192034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.203263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:00.782 [2024-11-20 17:50:24.217126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.217159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.782 [2024-11-20 17:50:24.217171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.961 ms 00:20:00.782 [2024-11-20 17:50:24.217179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.281983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.282121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:00.782 [2024-11-20 17:50:24.282143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.720 ms 00:20:00.782 [2024-11-20 17:50:24.282152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.282350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.282361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.782 [2024-11-20 17:50:24.282374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:00.782 [2024-11-20 17:50:24.282381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.782 [2024-11-20 17:50:24.305651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.782 [2024-11-20 17:50:24.305761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:00.782 [2024-11-20 17:50:24.305781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.237 ms 00:20:00.782 [2024-11-20 17:50:24.305789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.043 [2024-11-20 17:50:24.328442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.043 [2024-11-20 17:50:24.328542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:01.043 [2024-11-20 17:50:24.328561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.609 ms 00:20:01.043 [2024-11-20 17:50:24.328569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.043 [2024-11-20 17:50:24.329176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.043 [2024-11-20 17:50:24.329194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.043 [2024-11-20 17:50:24.329204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:20:01.043 [2024-11-20 17:50:24.329211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.043 [2024-11-20 17:50:24.399555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.043 [2024-11-20 17:50:24.399666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:01.043 [2024-11-20 17:50:24.399686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.316 ms 00:20:01.043 [2024-11-20 17:50:24.399694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
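Each startup step above is reported by trace_step() in mngt/ftl_mngt.c as the same four-line group: an Action marker, the step name, its duration, and a status (0 on success). The dominant cost in this run is the one-time NV cache scrub (2427.615 ms of the 2806.478 ms 'FTL startup' total that the management process reports below); that scrub is presumably also why trim.sh sets a 240 s client-side RPC timeout. The whole sequence was triggered by a single RPC, traced earlier at trim.sh@49:

    # The create call exactly as traced above: the base bdev is the
    # thin-provisioned lvol carved from the NVMe at 0000:00:11.0, and
    # nvc0n1p0 is the 5171 MiB split of the cache NVMe at 0000:00:10.0.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 \
        -d aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef \
        -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10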
00:20:01.043 [2024-11-20 17:50:24.423339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.043 [2024-11-20 17:50:24.423370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:01.043 [2024-11-20 17:50:24.423383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.575 ms 00:20:01.043 [2024-11-20 17:50:24.423391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.043 [2024-11-20 17:50:24.447252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.043 [2024-11-20 17:50:24.447282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:01.043 [2024-11-20 17:50:24.447293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.781 ms 00:20:01.043 [2024-11-20 17:50:24.447301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.044 [2024-11-20 17:50:24.471174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.044 [2024-11-20 17:50:24.471203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.044 [2024-11-20 17:50:24.471216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.819 ms 00:20:01.044 [2024-11-20 17:50:24.471234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.044 [2024-11-20 17:50:24.471291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.044 [2024-11-20 17:50:24.471302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.044 [2024-11-20 17:50:24.471314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:01.044 [2024-11-20 17:50:24.471321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.044 [2024-11-20 17:50:24.471390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.044 [2024-11-20 17:50:24.471399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.044 [2024-11-20 17:50:24.471408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:01.044 [2024-11-20 17:50:24.471415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.044 [2024-11-20 17:50:24.472167] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.044 [2024-11-20 17:50:24.475216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2806.478 ms, result 0 00:20:01.044 [2024-11-20 17:50:24.476049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.044 { 00:20:01.044 "name": "ftl0", 00:20:01.044 "uuid": "e06a0caf-34b4-47ef-af8c-ec0a4fae16c0" 00:20:01.044 } 00:20:01.044 17:50:24 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:01.044 17:50:24 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:01.044 17:50:24 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:01.044 17:50:24 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:01.044 17:50:24 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:01.044 17:50:24 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:01.044 17:50:24 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:01.305 17:50:24 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:01.566 [ 00:20:01.566 { 00:20:01.566 "name": "ftl0", 00:20:01.566 "aliases": [ 00:20:01.566 "e06a0caf-34b4-47ef-af8c-ec0a4fae16c0" 00:20:01.566 ], 00:20:01.566 "product_name": "FTL disk", 00:20:01.566 "block_size": 4096, 00:20:01.566 "num_blocks": 23592960, 00:20:01.566 "uuid": "e06a0caf-34b4-47ef-af8c-ec0a4fae16c0", 00:20:01.566 "assigned_rate_limits": { 00:20:01.566 "rw_ios_per_sec": 0, 00:20:01.566 "rw_mbytes_per_sec": 0, 00:20:01.566 "r_mbytes_per_sec": 0, 00:20:01.566 "w_mbytes_per_sec": 0 00:20:01.566 }, 00:20:01.566 "claimed": false, 00:20:01.566 "zoned": false, 00:20:01.566 "supported_io_types": { 00:20:01.566 "read": true, 00:20:01.566 "write": true, 00:20:01.566 "unmap": true, 00:20:01.566 "flush": true, 00:20:01.566 "reset": false, 00:20:01.566 "nvme_admin": false, 00:20:01.566 "nvme_io": false, 00:20:01.566 "nvme_io_md": false, 00:20:01.566 "write_zeroes": true, 00:20:01.566 "zcopy": false, 00:20:01.566 "get_zone_info": false, 00:20:01.566 "zone_management": false, 00:20:01.566 "zone_append": false, 00:20:01.566 "compare": false, 00:20:01.566 "compare_and_write": false, 00:20:01.566 "abort": false, 00:20:01.566 "seek_hole": false, 00:20:01.566 "seek_data": false, 00:20:01.567 "copy": false, 00:20:01.567 "nvme_iov_md": false 00:20:01.567 }, 00:20:01.567 "driver_specific": { 00:20:01.567 "ftl": { 00:20:01.567 "base_bdev": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 00:20:01.567 "cache": "nvc0n1p0" 00:20:01.567 } 00:20:01.567 } 00:20:01.567 } 00:20:01.567 ] 00:20:01.567 17:50:24 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:01.567 17:50:24 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:01.567 17:50:24 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:01.567 17:50:25 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:01.567 17:50:25 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:01.827 17:50:25 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:01.827 { 00:20:01.827 "name": "ftl0", 00:20:01.827 "aliases": [ 00:20:01.827 "e06a0caf-34b4-47ef-af8c-ec0a4fae16c0" 00:20:01.827 ], 00:20:01.827 "product_name": "FTL disk", 00:20:01.827 "block_size": 4096, 00:20:01.827 "num_blocks": 23592960, 00:20:01.827 "uuid": "e06a0caf-34b4-47ef-af8c-ec0a4fae16c0", 00:20:01.827 "assigned_rate_limits": { 00:20:01.827 "rw_ios_per_sec": 0, 00:20:01.827 "rw_mbytes_per_sec": 0, 00:20:01.827 "r_mbytes_per_sec": 0, 00:20:01.827 "w_mbytes_per_sec": 0 00:20:01.827 }, 00:20:01.827 "claimed": false, 00:20:01.827 "zoned": false, 00:20:01.827 "supported_io_types": { 00:20:01.827 "read": true, 00:20:01.827 "write": true, 00:20:01.827 "unmap": true, 00:20:01.827 "flush": true, 00:20:01.827 "reset": false, 00:20:01.827 "nvme_admin": false, 00:20:01.827 "nvme_io": false, 00:20:01.827 "nvme_io_md": false, 00:20:01.827 "write_zeroes": true, 00:20:01.827 "zcopy": false, 00:20:01.827 "get_zone_info": false, 00:20:01.827 "zone_management": false, 00:20:01.827 "zone_append": false, 00:20:01.827 "compare": false, 00:20:01.827 "compare_and_write": false, 00:20:01.827 "abort": false, 00:20:01.827 "seek_hole": false, 00:20:01.827 "seek_data": false, 00:20:01.827 "copy": false, 00:20:01.827 "nvme_iov_md": false 00:20:01.827 }, 00:20:01.827 "driver_specific": { 00:20:01.827 "ftl": { 00:20:01.827 "base_bdev": "aa01a5c0-67bd-4d96-a8a4-2bb40ae4d4ef", 
00:20:01.827 "cache": "nvc0n1p0" 00:20:01.827 } 00:20:01.827 } 00:20:01.827 } 00:20:01.827 ]' 00:20:01.827 17:50:25 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:01.827 17:50:25 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:01.827 17:50:25 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:02.087 [2024-11-20 17:50:25.583500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.087 [2024-11-20 17:50:25.583544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.087 [2024-11-20 17:50:25.583560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:02.087 [2024-11-20 17:50:25.583572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.087 [2024-11-20 17:50:25.583603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:02.087 [2024-11-20 17:50:25.586182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.087 [2024-11-20 17:50:25.586317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.087 [2024-11-20 17:50:25.586343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:20:02.087 [2024-11-20 17:50:25.586351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.087 [2024-11-20 17:50:25.586818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.087 [2024-11-20 17:50:25.586828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.087 [2024-11-20 17:50:25.586838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:20:02.087 [2024-11-20 17:50:25.586846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.087 [2024-11-20 17:50:25.590495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.087 [2024-11-20 17:50:25.590517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:02.087 [2024-11-20 17:50:25.590527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.613 ms 00:20:02.087 [2024-11-20 17:50:25.590534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.087 [2024-11-20 17:50:25.597550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.087 [2024-11-20 17:50:25.597658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.087 [2024-11-20 17:50:25.597675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.970 ms 00:20:02.087 [2024-11-20 17:50:25.597683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.087 [2024-11-20 17:50:25.621235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.087 [2024-11-20 17:50:25.621265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.087 [2024-11-20 17:50:25.621281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.469 ms 00:20:02.087 [2024-11-20 17:50:25.621288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.347 [2024-11-20 17:50:25.636085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.347 [2024-11-20 17:50:25.636201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.347 [2024-11-20 17:50:25.636221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.739 ms 00:20:02.347 [2024-11-20 17:50:25.636231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.347 [2024-11-20 17:50:25.636408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.347 [2024-11-20 17:50:25.636419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.347 [2024-11-20 17:50:25.636429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:20:02.347 [2024-11-20 17:50:25.636436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.347 [2024-11-20 17:50:25.659225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.347 [2024-11-20 17:50:25.659335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.347 [2024-11-20 17:50:25.659352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.756 ms 00:20:02.347 [2024-11-20 17:50:25.659359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.347 [2024-11-20 17:50:25.681676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.348 [2024-11-20 17:50:25.681705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.348 [2024-11-20 17:50:25.681737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.267 ms 00:20:02.348 [2024-11-20 17:50:25.681744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.348 [2024-11-20 17:50:25.703932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.348 [2024-11-20 17:50:25.703960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:02.348 [2024-11-20 17:50:25.703971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.132 ms 00:20:02.348 [2024-11-20 17:50:25.703978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.348 [2024-11-20 17:50:25.726373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.348 [2024-11-20 17:50:25.726400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:02.348 [2024-11-20 17:50:25.726411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.297 ms 00:20:02.348 [2024-11-20 17:50:25.726418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.348 [2024-11-20 17:50:25.726492] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:02.348 [2024-11-20 17:50:25.726507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726569] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 
[2024-11-20 17:50:25.726790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.726997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:02.348 [2024-11-20 17:50:25.727022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:02.348 [2024-11-20 17:50:25.727094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:02.349 [2024-11-20 17:50:25.727381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:02.349 [2024-11-20 17:50:25.727391] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:20:02.349 [2024-11-20 17:50:25.727399] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:02.349 [2024-11-20 17:50:25.727407] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:02.349 [2024-11-20 17:50:25.727414] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:02.349 [2024-11-20 17:50:25.727424] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:02.349 [2024-11-20 17:50:25.727431] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:02.349 [2024-11-20 17:50:25.727440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:02.349 [2024-11-20 17:50:25.727447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:02.349 [2024-11-20 17:50:25.727454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:02.349 [2024-11-20 17:50:25.727460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:02.349 [2024-11-20 17:50:25.727469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.349 [2024-11-20 17:50:25.727476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:02.349 [2024-11-20 17:50:25.727486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:20:02.349 [2024-11-20 17:50:25.727493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.349 [2024-11-20 17:50:25.739741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.349 [2024-11-20 17:50:25.739769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:02.349 [2024-11-20 17:50:25.739781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:20:02.349 [2024-11-20 17:50:25.739789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.349 [2024-11-20 17:50:25.740174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.349 [2024-11-20 17:50:25.740190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:02.349 [2024-11-20 17:50:25.740199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:20:02.349 [2024-11-20 17:50:25.740206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.349 [2024-11-20 17:50:25.783416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.349 [2024-11-20 17:50:25.783446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.349 [2024-11-20 17:50:25.783457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.349 [2024-11-20 17:50:25.783465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.349 [2024-11-20 17:50:25.783551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.349 [2024-11-20 17:50:25.783560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.349 [2024-11-20 17:50:25.783569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.349 [2024-11-20 17:50:25.783576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.349 [2024-11-20 17:50:25.783630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.349 [2024-11-20 17:50:25.783639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.349 [2024-11-20 17:50:25.783652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.349 [2024-11-20 17:50:25.783659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.349 [2024-11-20 17:50:25.783687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.349 [2024-11-20 17:50:25.783695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.349 [2024-11-20 17:50:25.783703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.349 [2024-11-20 17:50:25.783710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.349 [2024-11-20 17:50:25.864415] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.349 [2024-11-20 17:50:25.864453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.349 [2024-11-20 17:50:25.864465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.349 [2024-11-20 17:50:25.864473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.608 [2024-11-20 17:50:25.927260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.608 [2024-11-20 17:50:25.927296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.608 [2024-11-20 17:50:25.927308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.608 [2024-11-20 17:50:25.927315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.608 [2024-11-20 17:50:25.927380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.608 [2024-11-20 17:50:25.927389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.608 [2024-11-20 17:50:25.927412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.608 [2024-11-20 17:50:25.927422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.608 [2024-11-20 17:50:25.927475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.608 [2024-11-20 17:50:25.927483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.609 [2024-11-20 17:50:25.927492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.609 [2024-11-20 17:50:25.927498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.609 [2024-11-20 17:50:25.927606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.609 [2024-11-20 17:50:25.927616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.609 [2024-11-20 17:50:25.927625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.609 [2024-11-20 17:50:25.927634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.609 [2024-11-20 17:50:25.927678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.609 [2024-11-20 17:50:25.927686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:02.609 [2024-11-20 17:50:25.927695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.609 [2024-11-20 17:50:25.927702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.609 [2024-11-20 17:50:25.927746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.609 [2024-11-20 17:50:25.927754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.609 [2024-11-20 17:50:25.927764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.609 [2024-11-20 17:50:25.927771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.609 [2024-11-20 17:50:25.927821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.609 [2024-11-20 17:50:25.927830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.609 [2024-11-20 17:50:25.927839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.609 [2024-11-20 17:50:25.927846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:02.609 [2024-11-20 17:50:25.928049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.534 ms, result 0 00:20:02.609 true 00:20:02.609 17:50:25 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76502 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76502 ']' 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76502 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76502 00:20:02.609 killing process with pid 76502 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76502' 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76502 00:20:02.609 17:50:25 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76502 00:20:09.188 17:50:31 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:09.450 65536+0 records in 00:20:09.450 65536+0 records out 00:20:09.450 268435456 bytes (268 MB, 256 MiB) copied, 1.10491 s, 243 MB/s 00:20:09.450 17:50:32 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:09.711 [2024-11-20 17:50:33.005689] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
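The killprocess trace above (autotest_common.sh lines 954 through 978 in this run) follows a fixed sequence: check that a pid argument was given, confirm the process is still alive with kill -0, look up its command name with ps so that a sudo wrapper would be handled specially, then signal the process and wait for it to exit. A condensed bash sketch of that flow, not the verbatim helper:

#!/usr/bin/env bash
# Condensed sketch of the killprocess flow traced above; the real helper in
# common/autotest_common.sh may handle more cases than shown here.
killprocess() {
    local pid=$1 process_name=
    [[ -n $pid ]] || return 1              # '[' -z 76502 ']' guard
    kill -0 "$pid" || return 1             # is the target still running?
    if [[ $(uname) == Linux ]]; then
        # comm of pid 76502 resolves to reactor_0 in the trace above
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ $process_name != sudo ]]; then   # '[' reactor_0 = sudo ']'
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                            # reap the test app, pid 76502 here
}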
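The 256 MiB of /dev/urandom output produced just above is the random pattern that spdk_dd writes through ftl0 next. The three figures on the dd summary line are mutually consistent once units are kept straight: the MB/s rate uses decimal units (10^6 bytes) while the 256 MiB count is binary (2^20 bytes). A minimal, illustrative bash check:

#!/usr/bin/env bash
# Sanity-check the dd summary line above:
#   268435456 bytes (268 MB, 256 MiB) copied, 1.10491 s, 243 MB/s
bytes=$((4096 * 65536))                  # bs=4K count=65536
secs=1.10491
echo "bytes: $bytes"                     # 268435456
echo "MiB:   $((bytes / 1024 / 1024))"   # 256 (binary units)
echo "MB:    $((bytes / 1000 / 1000))"   # 268 (decimal units)
# The reported rate is decimal megabytes per elapsed second:
awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "MB/s:  %.0f\n", b / 1e6 / s }'   # 243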
00:20:09.711 [2024-11-20 17:50:33.005822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76686 ] 00:20:09.711 [2024-11-20 17:50:33.168503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.972 [2024-11-20 17:50:33.284007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.233 [2024-11-20 17:50:33.577593] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.233 [2024-11-20 17:50:33.577671] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.233 [2024-11-20 17:50:33.739024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.233 [2024-11-20 17:50:33.739086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:10.233 [2024-11-20 17:50:33.739101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:10.233 [2024-11-20 17:50:33.739110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.742133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.233 [2024-11-20 17:50:33.742183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.233 [2024-11-20 17:50:33.742194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.002 ms 00:20:10.233 [2024-11-20 17:50:33.742202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.742371] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:10.233 [2024-11-20 17:50:33.743266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:10.233 [2024-11-20 17:50:33.743316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.233 [2024-11-20 17:50:33.743325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.233 [2024-11-20 17:50:33.743336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:20:10.233 [2024-11-20 17:50:33.743344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.745032] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:10.233 [2024-11-20 17:50:33.758913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.233 [2024-11-20 17:50:33.758965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:10.233 [2024-11-20 17:50:33.758978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.883 ms 00:20:10.233 [2024-11-20 17:50:33.758986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.759101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.233 [2024-11-20 17:50:33.759114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:10.233 [2024-11-20 17:50:33.759124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:10.233 [2024-11-20 17:50:33.759132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.766981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:10.233 [2024-11-20 17:50:33.767020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.233 [2024-11-20 17:50:33.767030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.803 ms 00:20:10.233 [2024-11-20 17:50:33.767037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.767151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.233 [2024-11-20 17:50:33.767161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.233 [2024-11-20 17:50:33.767170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:10.233 [2024-11-20 17:50:33.767177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.767205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.233 [2024-11-20 17:50:33.767216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:10.233 [2024-11-20 17:50:33.767225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:10.233 [2024-11-20 17:50:33.767232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.233 [2024-11-20 17:50:33.767255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:10.497 [2024-11-20 17:50:33.771160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.497 [2024-11-20 17:50:33.771355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.497 [2024-11-20 17:50:33.771377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.912 ms 00:20:10.497 [2024-11-20 17:50:33.771385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.497 [2024-11-20 17:50:33.771466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.497 [2024-11-20 17:50:33.771477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:10.497 [2024-11-20 17:50:33.771486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:10.497 [2024-11-20 17:50:33.771493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.497 [2024-11-20 17:50:33.771514] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:10.497 [2024-11-20 17:50:33.771540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:10.497 [2024-11-20 17:50:33.771577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:10.497 [2024-11-20 17:50:33.771594] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:10.497 [2024-11-20 17:50:33.771701] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:10.497 [2024-11-20 17:50:33.771712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:10.497 [2024-11-20 17:50:33.771723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:10.497 [2024-11-20 17:50:33.771733] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:10.497 [2024-11-20 17:50:33.771745] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:10.497 [2024-11-20 17:50:33.771754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:10.497 [2024-11-20 17:50:33.771761] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:10.497 [2024-11-20 17:50:33.771769] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:10.497 [2024-11-20 17:50:33.771777] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:10.497 [2024-11-20 17:50:33.771784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.497 [2024-11-20 17:50:33.771791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:10.497 [2024-11-20 17:50:33.771799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:20:10.497 [2024-11-20 17:50:33.771806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.497 [2024-11-20 17:50:33.771909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.497 [2024-11-20 17:50:33.771921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:10.497 [2024-11-20 17:50:33.771930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:10.497 [2024-11-20 17:50:33.771937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.497 [2024-11-20 17:50:33.772039] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:10.497 [2024-11-20 17:50:33.772050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:10.497 [2024-11-20 17:50:33.772059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.497 [2024-11-20 17:50:33.772066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.497 [2024-11-20 17:50:33.772074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:10.497 [2024-11-20 17:50:33.772081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:10.497 [2024-11-20 17:50:33.772088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:10.497 [2024-11-20 17:50:33.772095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:10.497 [2024-11-20 17:50:33.772102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:10.497 [2024-11-20 17:50:33.772109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.497 [2024-11-20 17:50:33.772117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:10.497 [2024-11-20 17:50:33.772125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:10.497 [2024-11-20 17:50:33.772132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.497 [2024-11-20 17:50:33.772146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:10.497 [2024-11-20 17:50:33.772154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:10.497 [2024-11-20 17:50:33.772162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.497 [2024-11-20 17:50:33.772169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:10.497 [2024-11-20 17:50:33.772176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:10.497 [2024-11-20 17:50:33.772183] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.497 [2024-11-20 17:50:33.772189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:10.497 [2024-11-20 17:50:33.772197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:10.497 [2024-11-20 17:50:33.772204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.497 [2024-11-20 17:50:33.772210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:10.498 [2024-11-20 17:50:33.772217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:10.498 [2024-11-20 17:50:33.772223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.498 [2024-11-20 17:50:33.772230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:10.498 [2024-11-20 17:50:33.772237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:10.498 [2024-11-20 17:50:33.772243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.498 [2024-11-20 17:50:33.772250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:10.498 [2024-11-20 17:50:33.772256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:10.498 [2024-11-20 17:50:33.772263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.498 [2024-11-20 17:50:33.772269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:10.498 [2024-11-20 17:50:33.772276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:10.498 [2024-11-20 17:50:33.772282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.498 [2024-11-20 17:50:33.772288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:10.498 [2024-11-20 17:50:33.772295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:10.498 [2024-11-20 17:50:33.772301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.498 [2024-11-20 17:50:33.772308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:10.498 [2024-11-20 17:50:33.772314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:10.498 [2024-11-20 17:50:33.772321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.498 [2024-11-20 17:50:33.772327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:10.498 [2024-11-20 17:50:33.772334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:10.498 [2024-11-20 17:50:33.772341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.498 [2024-11-20 17:50:33.772348] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:10.498 [2024-11-20 17:50:33.772356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:10.498 [2024-11-20 17:50:33.772363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.498 [2024-11-20 17:50:33.772373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.498 [2024-11-20 17:50:33.772381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:10.498 [2024-11-20 17:50:33.772389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:10.498 [2024-11-20 17:50:33.772395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:10.498 
[2024-11-20 17:50:33.772403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:10.498 [2024-11-20 17:50:33.772409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:10.498 [2024-11-20 17:50:33.772416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:10.498 [2024-11-20 17:50:33.772424] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:10.498 [2024-11-20 17:50:33.772433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.498 [2024-11-20 17:50:33.772442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:10.498 [2024-11-20 17:50:33.772450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:10.498 [2024-11-20 17:50:33.772457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:10.498 [2024-11-20 17:50:33.772464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:10.498 [2024-11-20 17:50:33.772471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:10.498 [2024-11-20 17:50:33.772477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:10.498 [2024-11-20 17:50:33.772484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:10.498 [2024-11-20 17:50:33.772491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:10.498 [2024-11-20 17:50:33.772498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:10.498 [2024-11-20 17:50:33.772505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:10.498 [2024-11-20 17:50:33.772511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:10.498 [2024-11-20 17:50:33.772518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:10.498 [2024-11-20 17:50:33.772525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:10.498 [2024-11-20 17:50:33.772532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:10.498 [2024-11-20 17:50:33.772538] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:10.498 [2024-11-20 17:50:33.772547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.498 [2024-11-20 17:50:33.772555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:10.498 [2024-11-20 17:50:33.772561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:10.498 [2024-11-20 17:50:33.772568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:10.498 [2024-11-20 17:50:33.772577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:10.498 [2024-11-20 17:50:33.772584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.498 [2024-11-20 17:50:33.772592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:10.498 [2024-11-20 17:50:33.772603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:20:10.498 [2024-11-20 17:50:33.772634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.498 [2024-11-20 17:50:33.804034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.498 [2024-11-20 17:50:33.804080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.498 [2024-11-20 17:50:33.804091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.345 ms 00:20:10.498 [2024-11-20 17:50:33.804100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.498 [2024-11-20 17:50:33.804231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.498 [2024-11-20 17:50:33.804247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:10.498 [2024-11-20 17:50:33.804256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:10.498 [2024-11-20 17:50:33.804264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.849593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.849804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.499 [2024-11-20 17:50:33.849827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.306 ms 00:20:10.499 [2024-11-20 17:50:33.849843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.849991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.850005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.499 [2024-11-20 17:50:33.850015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.499 [2024-11-20 17:50:33.850023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.850606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.850637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.499 [2024-11-20 17:50:33.850647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:20:10.499 [2024-11-20 17:50:33.850664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.850824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.850843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.499 [2024-11-20 17:50:33.850852] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:10.499 [2024-11-20 17:50:33.850860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.866802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.866849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.499 [2024-11-20 17:50:33.866860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.904 ms 00:20:10.499 [2024-11-20 17:50:33.866869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.881098] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:10.499 [2024-11-20 17:50:33.881150] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:10.499 [2024-11-20 17:50:33.881165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.881174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:10.499 [2024-11-20 17:50:33.881184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.158 ms 00:20:10.499 [2024-11-20 17:50:33.881191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.907500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.907547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:10.499 [2024-11-20 17:50:33.907570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.214 ms 00:20:10.499 [2024-11-20 17:50:33.907579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.920231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.920275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:10.499 [2024-11-20 17:50:33.920287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.555 ms 00:20:10.499 [2024-11-20 17:50:33.920293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.933100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.933155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:10.499 [2024-11-20 17:50:33.933167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.720 ms 00:20:10.499 [2024-11-20 17:50:33.933175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.933834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.933859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:10.499 [2024-11-20 17:50:33.933886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:20:10.499 [2024-11-20 17:50:33.933894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:33.998458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:33.998519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:10.499 [2024-11-20 17:50:33.998535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.535 ms 00:20:10.499 [2024-11-20 17:50:33.998545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:34.009929] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:10.499 [2024-11-20 17:50:34.028555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:34.028607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:10.499 [2024-11-20 17:50:34.028620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.901 ms 00:20:10.499 [2024-11-20 17:50:34.028629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:34.028729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:34.028744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:10.499 [2024-11-20 17:50:34.028754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:10.499 [2024-11-20 17:50:34.028763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:34.028822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:34.028831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:10.499 [2024-11-20 17:50:34.028840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:10.499 [2024-11-20 17:50:34.028849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:34.028918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:34.028928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:10.499 [2024-11-20 17:50:34.028939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:10.499 [2024-11-20 17:50:34.028948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.499 [2024-11-20 17:50:34.028986] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:10.499 [2024-11-20 17:50:34.028997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.499 [2024-11-20 17:50:34.029005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:10.499 [2024-11-20 17:50:34.029014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:10.499 [2024-11-20 17:50:34.029044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.762 [2024-11-20 17:50:34.055196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.762 [2024-11-20 17:50:34.055252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:10.762 [2024-11-20 17:50:34.055265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.129 ms 00:20:10.762 [2024-11-20 17:50:34.055274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.762 [2024-11-20 17:50:34.055412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.762 [2024-11-20 17:50:34.055425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:10.762 [2024-11-20 17:50:34.055435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:10.762 [2024-11-20 17:50:34.055443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
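The l2p cache note just above ties back to the layout dump earlier in this startup: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB reported for the l2p region, and the resident cache is capped just under its budget at 59 of 60 MiB. The arithmetic, as an illustrative bash check:

# 90.00 MiB l2p region == L2P entries * L2P address size
# (both values from the ftl_layout.c output above)
entries=23592960
addr_size=4
echo $(( entries * addr_size ))                   # 94371840 bytes
echo $(( entries * addr_size / 1024 / 1024 ))     # 90 MiB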
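Every management step in the sequence above is logged by trace_step() in mngt/ftl_mngt.c as a four-entry Action / name / duration / status group, and finish_msg() then reports a single overall duration for the process (the 'FTL startup' total appears just below). Assuming the NOTICE lines have been saved one entry per line to a file, with ftl_startup.log as a placeholder name, a small bash sketch can tally the per-step durations for comparison with that total:

#!/usr/bin/env bash
# Sum per-step durations from an FTL trace_step log; the "name:" and
# "duration:" patterns are taken from the entries above.
log=ftl_startup.log

awk '
  /trace_step:.*name:/     { sub(/.*name: /, "");     name = $0 }
  /trace_step:.*duration:/ { sub(/.*duration: /, ""); ms = $1 + 0
                             printf "%10.3f ms  %s\n", ms, name
                             total += ms }
  END { printf "%10.3f ms  (sum of traced steps)\n", total }
' "$log"

The sum will typically land somewhat under the finish_msg total, since time spent between traced steps is not attributed to any single Action.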
00:20:10.762 [2024-11-20 17:50:34.056965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:10.762 [2024-11-20 17:50:34.060519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.577 ms, result 0 00:20:10.762 [2024-11-20 17:50:34.061920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.762 [2024-11-20 17:50:34.075639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.726  [2024-11-20T17:50:36.209Z] Copying: 21/256 [MB] (21 MBps) [2024-11-20T17:50:37.151Z] Copying: 58/256 [MB] (37 MBps) [2024-11-20T17:50:38.094Z] Copying: 87/256 [MB] (28 MBps) [2024-11-20T17:50:39.479Z] Copying: 101/256 [MB] (14 MBps) [2024-11-20T17:50:40.423Z] Copying: 124/256 [MB] (22 MBps) [2024-11-20T17:50:41.367Z] Copying: 138/256 [MB] (13 MBps) [2024-11-20T17:50:42.311Z] Copying: 176/256 [MB] (38 MBps) [2024-11-20T17:50:43.258Z] Copying: 209/256 [MB] (32 MBps) [2024-11-20T17:50:44.203Z] Copying: 221/256 [MB] (12 MBps) [2024-11-20T17:50:44.777Z] Copying: 242/256 [MB] (20 MBps) [2024-11-20T17:50:44.777Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-20 17:50:44.709222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.237 [2024-11-20 17:50:44.719912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.237 [2024-11-20 17:50:44.720104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.237 [2024-11-20 17:50:44.720179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:21.237 [2024-11-20 17:50:44.720204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.237 [2024-11-20 17:50:44.720259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:21.237 [2024-11-20 17:50:44.723328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.237 [2024-11-20 17:50:44.723487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.237 [2024-11-20 17:50:44.723558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:20:21.237 [2024-11-20 17:50:44.723583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.237 [2024-11-20 17:50:44.727027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.237 [2024-11-20 17:50:44.727194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.237 [2024-11-20 17:50:44.727262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:20:21.237 [2024-11-20 17:50:44.727286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.237 [2024-11-20 17:50:44.734890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.237 [2024-11-20 17:50:44.735073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.237 [2024-11-20 17:50:44.735151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.568 ms 00:20:21.237 [2024-11-20 17:50:44.735176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.237 [2024-11-20 17:50:44.742150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.237 [2024-11-20 17:50:44.742316] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:21.237 [2024-11-20 17:50:44.742375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:20:21.237 [2024-11-20 17:50:44.742398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.237 [2024-11-20 17:50:44.768399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.237 [2024-11-20 17:50:44.768579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.237 [2024-11-20 17:50:44.768641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.908 ms 00:20:21.237 [2024-11-20 17:50:44.768662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.499 [2024-11-20 17:50:44.785355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.499 [2024-11-20 17:50:44.785539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:21.499 [2024-11-20 17:50:44.785617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.619 ms 00:20:21.499 [2024-11-20 17:50:44.785644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.499 [2024-11-20 17:50:44.786071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.499 [2024-11-20 17:50:44.786212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:21.499 [2024-11-20 17:50:44.786238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:21.499 [2024-11-20 17:50:44.786257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.499 [2024-11-20 17:50:44.812411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.499 [2024-11-20 17:50:44.812589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:21.499 [2024-11-20 17:50:44.812649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.122 ms 00:20:21.499 [2024-11-20 17:50:44.812670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.499 [2024-11-20 17:50:44.838792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.499 [2024-11-20 17:50:44.838978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:21.499 [2024-11-20 17:50:44.838999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.033 ms 00:20:21.499 [2024-11-20 17:50:44.839006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.499 [2024-11-20 17:50:44.864377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.499 [2024-11-20 17:50:44.864427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:21.499 [2024-11-20 17:50:44.864439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.312 ms 00:20:21.499 [2024-11-20 17:50:44.864446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.499 [2024-11-20 17:50:44.890102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.499 [2024-11-20 17:50:44.890283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:21.499 [2024-11-20 17:50:44.890304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.549 ms 00:20:21.499 [2024-11-20 17:50:44.890311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.499 [2024-11-20 17:50:44.890367] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:20:21.499 [2024-11-20 17:50:44.890390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:21.499 [2024-11-20 17:50:44.890400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:21.499 [2024-11-20 17:50:44.890408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:21.499 [2024-11-20 17:50:44.890439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:21.499 [2024-11-20 17:50:44.890448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:21.499 [2024-11-20 17:50:44.890455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:21.499 [2024-11-20 17:50:44.890463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.890995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891010] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:21.500 [2024-11-20 17:50:44.891129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891221] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:21.501 [2024-11-20 17:50:44.891237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.501 [2024-11-20 17:50:44.891245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:20:21.501 [2024-11-20 17:50:44.891253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.501 [2024-11-20 17:50:44.891260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.501 [2024-11-20 17:50:44.891268] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.501 [2024-11-20 17:50:44.891276] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.501 [2024-11-20 17:50:44.891284] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:21.501 [2024-11-20 17:50:44.891292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:21.501 [2024-11-20 17:50:44.891299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.501 [2024-11-20 17:50:44.891305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.501 [2024-11-20 17:50:44.891312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.501 [2024-11-20 17:50:44.891319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.501 [2024-11-20 17:50:44.891327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.501 [2024-11-20 17:50:44.891339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:20:21.501 [2024-11-20 17:50:44.891347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.501 [2024-11-20 17:50:44.904942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.501 [2024-11-20 17:50:44.905116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.501 [2024-11-20 17:50:44.905134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.573 ms 00:20:21.501 [2024-11-20 17:50:44.905142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.501 [2024-11-20 17:50:44.905544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.501 [2024-11-20 17:50:44.905565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.501 [2024-11-20 17:50:44.905575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:20:21.501 [2024-11-20 17:50:44.905583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.501 [2024-11-20 17:50:44.944575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.501 [2024-11-20 17:50:44.944762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.501 [2024-11-20 17:50:44.944783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.501 [2024-11-20 17:50:44.944791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.501 [2024-11-20 17:50:44.944908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.501 [2024-11-20 17:50:44.944923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.501 [2024-11-20 17:50:44.944932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.501 [2024-11-20 17:50:44.944940] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.501 [2024-11-20 17:50:44.944998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.501 [2024-11-20 17:50:44.945008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.501 [2024-11-20 17:50:44.945017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.501 [2024-11-20 17:50:44.945025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.501 [2024-11-20 17:50:44.945043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.501 [2024-11-20 17:50:44.945052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.501 [2024-11-20 17:50:44.945063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.501 [2024-11-20 17:50:44.945071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.501 [2024-11-20 17:50:45.028834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.501 [2024-11-20 17:50:45.028910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.501 [2024-11-20 17:50:45.028926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.501 [2024-11-20 17:50:45.028934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.098459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.762 [2024-11-20 17:50:45.098521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.762 [2024-11-20 17:50:45.098534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.762 [2024-11-20 17:50:45.098542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.098602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.762 [2024-11-20 17:50:45.098611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.762 [2024-11-20 17:50:45.098620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.762 [2024-11-20 17:50:45.098628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.098660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.762 [2024-11-20 17:50:45.098670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.762 [2024-11-20 17:50:45.098679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.762 [2024-11-20 17:50:45.098691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.098792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.762 [2024-11-20 17:50:45.098803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.762 [2024-11-20 17:50:45.098811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.762 [2024-11-20 17:50:45.098819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.098853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.762 [2024-11-20 17:50:45.098863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:21.762 [2024-11-20 17:50:45.098909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:21.762 [2024-11-20 17:50:45.098918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.098966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.762 [2024-11-20 17:50:45.098976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.762 [2024-11-20 17:50:45.098985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.762 [2024-11-20 17:50:45.098993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.099054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.762 [2024-11-20 17:50:45.099066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.762 [2024-11-20 17:50:45.099075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.762 [2024-11-20 17:50:45.099088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.762 [2024-11-20 17:50:45.099247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.333 ms, result 0 00:20:22.334 00:20:22.334 00:20:22.334 17:50:45 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76827 00:20:22.334 17:50:45 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76827 00:20:22.334 17:50:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76827 ']' 00:20:22.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.334 17:50:45 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.334 17:50:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.334 17:50:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.334 17:50:45 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.334 17:50:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:22.334 17:50:45 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:22.334 [2024-11-20 17:50:45.828499] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:20:22.334 [2024-11-20 17:50:45.829082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76827 ] 00:20:22.594 [2024-11-20 17:50:45.980644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.594 [2024-11-20 17:50:46.055334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.166 17:50:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.166 17:50:46 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:23.166 17:50:46 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:23.426 [2024-11-20 17:50:46.803283] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.426 [2024-11-20 17:50:46.803443] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.686 [2024-11-20 17:50:46.967401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.686 [2024-11-20 17:50:46.967533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:23.686 [2024-11-20 17:50:46.967608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:23.686 [2024-11-20 17:50:46.967628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.686 [2024-11-20 17:50:46.969701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.686 [2024-11-20 17:50:46.969805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.686 [2024-11-20 17:50:46.969858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:20:23.686 [2024-11-20 17:50:46.969894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.686 [2024-11-20 17:50:46.970001] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:23.686 [2024-11-20 17:50:46.970713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:23.686 [2024-11-20 17:50:46.970804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.686 [2024-11-20 17:50:46.970847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.687 [2024-11-20 17:50:46.970868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:20:23.687 [2024-11-20 17:50:46.970922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.971926] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:23.687 [2024-11-20 17:50:46.981595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.981694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:23.687 [2024-11-20 17:50:46.981740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.674 ms 00:20:23.687 [2024-11-20 17:50:46.981760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.981832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.981856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:23.687 [2024-11-20 17:50:46.981887] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:23.687 [2024-11-20 17:50:46.981932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.986348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.986449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.687 [2024-11-20 17:50:46.986492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:20:23.687 [2024-11-20 17:50:46.986511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.986596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.986618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.687 [2024-11-20 17:50:46.986634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:23.687 [2024-11-20 17:50:46.986676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.986711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.986730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:23.687 [2024-11-20 17:50:46.986781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:23.687 [2024-11-20 17:50:46.986830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.986861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:23.687 [2024-11-20 17:50:46.989483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.989563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.687 [2024-11-20 17:50:46.989608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:20:23.687 [2024-11-20 17:50:46.989625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.989665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.989783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:23.687 [2024-11-20 17:50:46.989804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.687 [2024-11-20 17:50:46.989820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.989847] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:23.687 [2024-11-20 17:50:46.989913] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:23.687 [2024-11-20 17:50:46.989964] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:23.687 [2024-11-20 17:50:46.989993] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:23.687 [2024-11-20 17:50:46.990116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:23.687 [2024-11-20 17:50:46.990142] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:23.687 [2024-11-20 17:50:46.990202] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:23.687 [2024-11-20 17:50:46.990227] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990252] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:23.687 [2024-11-20 17:50:46.990342] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:23.687 [2024-11-20 17:50:46.990359] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:23.687 [2024-11-20 17:50:46.990377] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:23.687 [2024-11-20 17:50:46.990392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.990408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:23.687 [2024-11-20 17:50:46.990431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:23.687 [2024-11-20 17:50:46.990440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.990512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.687 [2024-11-20 17:50:46.990519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:23.687 [2024-11-20 17:50:46.990525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:23.687 [2024-11-20 17:50:46.990532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.687 [2024-11-20 17:50:46.990618] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:23.687 [2024-11-20 17:50:46.990628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:23.687 [2024-11-20 17:50:46.990634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:23.687 [2024-11-20 17:50:46.990654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:23.687 [2024-11-20 17:50:46.990674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.687 [2024-11-20 17:50:46.990685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:23.687 [2024-11-20 17:50:46.990691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:23.687 [2024-11-20 17:50:46.990696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.687 [2024-11-20 17:50:46.990702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:23.687 [2024-11-20 17:50:46.990707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:23.687 [2024-11-20 17:50:46.990713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.687 
[2024-11-20 17:50:46.990719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:23.687 [2024-11-20 17:50:46.990725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:23.687 [2024-11-20 17:50:46.990745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:23.687 [2024-11-20 17:50:46.990763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:23.687 [2024-11-20 17:50:46.990781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:23.687 [2024-11-20 17:50:46.990798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:23.687 [2024-11-20 17:50:46.990816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.687 [2024-11-20 17:50:46.990828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:23.687 [2024-11-20 17:50:46.990834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:23.687 [2024-11-20 17:50:46.990839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.687 [2024-11-20 17:50:46.990845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:23.687 [2024-11-20 17:50:46.990850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:23.687 [2024-11-20 17:50:46.990857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:23.687 [2024-11-20 17:50:46.990878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:23.687 [2024-11-20 17:50:46.990884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990890] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:23.687 [2024-11-20 17:50:46.990898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:23.687 [2024-11-20 17:50:46.990904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.687 [2024-11-20 17:50:46.990910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.687 [2024-11-20 17:50:46.990917] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:23.688 [2024-11-20 17:50:46.990922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:23.688 [2024-11-20 17:50:46.990929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:23.688 [2024-11-20 17:50:46.990934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:23.688 [2024-11-20 17:50:46.990940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:23.688 [2024-11-20 17:50:46.990945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:23.688 [2024-11-20 17:50:46.990952] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:23.688 [2024-11-20 17:50:46.990960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.688 [2024-11-20 17:50:46.990969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:23.688 [2024-11-20 17:50:46.990974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:23.688 [2024-11-20 17:50:46.990982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:23.688 [2024-11-20 17:50:46.990988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:23.688 [2024-11-20 17:50:46.990994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:23.688 [2024-11-20 17:50:46.990999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:23.688 [2024-11-20 17:50:46.991006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:23.688 [2024-11-20 17:50:46.991011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:23.688 [2024-11-20 17:50:46.991019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:23.688 [2024-11-20 17:50:46.991024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:23.688 [2024-11-20 17:50:46.991031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:23.688 [2024-11-20 17:50:46.991036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:23.688 [2024-11-20 17:50:46.991043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:23.688 [2024-11-20 17:50:46.991053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:23.688 [2024-11-20 17:50:46.991059] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:23.688 [2024-11-20 
17:50:46.991065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.688 [2024-11-20 17:50:46.991074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:23.688 [2024-11-20 17:50:46.991079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:23.688 [2024-11-20 17:50:46.991086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:23.688 [2024-11-20 17:50:46.991092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:23.688 [2024-11-20 17:50:46.991099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:46.991104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:23.688 [2024-11-20 17:50:46.991111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:20:23.688 [2024-11-20 17:50:46.991116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.011934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.011960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.688 [2024-11-20 17:50:47.011970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.772 ms 00:20:23.688 [2024-11-20 17:50:47.011978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.012068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.012075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:23.688 [2024-11-20 17:50:47.012083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:23.688 [2024-11-20 17:50:47.012089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.035932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.035958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.688 [2024-11-20 17:50:47.035968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.824 ms 00:20:23.688 [2024-11-20 17:50:47.035974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.036017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.036024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.688 [2024-11-20 17:50:47.036032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:23.688 [2024-11-20 17:50:47.036037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.036329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.036348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.688 [2024-11-20 17:50:47.036358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:23.688 [2024-11-20 17:50:47.036364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.036463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.036472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.688 [2024-11-20 17:50:47.036479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:23.688 [2024-11-20 17:50:47.036485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.048047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.048071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.688 [2024-11-20 17:50:47.048080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.544 ms 00:20:23.688 [2024-11-20 17:50:47.048086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.075783] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:23.688 [2024-11-20 17:50:47.075816] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:23.688 [2024-11-20 17:50:47.075829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.075836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:23.688 [2024-11-20 17:50:47.075845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.668 ms 00:20:23.688 [2024-11-20 17:50:47.075851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.094362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.094480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:23.688 [2024-11-20 17:50:47.094498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.437 ms 00:20:23.688 [2024-11-20 17:50:47.094504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.103472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.103499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:23.688 [2024-11-20 17:50:47.103510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.910 ms 00:20:23.688 [2024-11-20 17:50:47.103515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.112125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.112150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:23.688 [2024-11-20 17:50:47.112159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.565 ms 00:20:23.688 [2024-11-20 17:50:47.112164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.112623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.112643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:23.688 [2024-11-20 17:50:47.112652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:20:23.688 [2024-11-20 17:50:47.112657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 
17:50:47.156634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.156778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:23.688 [2024-11-20 17:50:47.156796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.957 ms 00:20:23.688 [2024-11-20 17:50:47.156803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.164748] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:23.688 [2024-11-20 17:50:47.176424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.176461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:23.688 [2024-11-20 17:50:47.176472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.563 ms 00:20:23.688 [2024-11-20 17:50:47.176479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.176550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.176559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:23.688 [2024-11-20 17:50:47.176566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.688 [2024-11-20 17:50:47.176573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.688 [2024-11-20 17:50:47.176608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.688 [2024-11-20 17:50:47.176617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:23.688 [2024-11-20 17:50:47.176622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:23.688 [2024-11-20 17:50:47.176632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.689 [2024-11-20 17:50:47.176649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.689 [2024-11-20 17:50:47.176657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:23.689 [2024-11-20 17:50:47.176662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:23.689 [2024-11-20 17:50:47.176671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.689 [2024-11-20 17:50:47.176695] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:23.689 [2024-11-20 17:50:47.176704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.689 [2024-11-20 17:50:47.176710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:23.689 [2024-11-20 17:50:47.176720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:23.689 [2024-11-20 17:50:47.176725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.689 [2024-11-20 17:50:47.194522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.689 [2024-11-20 17:50:47.194551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:23.689 [2024-11-20 17:50:47.194562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.777 ms 00:20:23.689 [2024-11-20 17:50:47.194568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.689 [2024-11-20 17:50:47.194639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.689 [2024-11-20 17:50:47.194647] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:23.689 [2024-11-20 17:50:47.194655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:23.689 [2024-11-20 17:50:47.194662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.689 [2024-11-20 17:50:47.195307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.689 [2024-11-20 17:50:47.197518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.691 ms, result 0 00:20:23.689 [2024-11-20 17:50:47.198444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:23.949 Some configs were skipped because the RPC state that can call them passed over. 00:20:23.949 17:50:47 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:23.949 [2024-11-20 17:50:47.426830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.949 [2024-11-20 17:50:47.426953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:23.949 [2024-11-20 17:50:47.427001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:20:23.949 [2024-11-20 17:50:47.427022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.949 [2024-11-20 17:50:47.427062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.724 ms, result 0 00:20:23.949 true 00:20:23.949 17:50:47 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:24.209 [2024-11-20 17:50:47.626792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.209 [2024-11-20 17:50:47.626902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:24.209 [2024-11-20 17:50:47.626954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:20:24.209 [2024-11-20 17:50:47.626973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.209 [2024-11-20 17:50:47.627012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.510 ms, result 0 00:20:24.209 true 00:20:24.209 17:50:47 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76827 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76827 ']' 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76827 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76827 00:20:24.209 killing process with pid 76827 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76827' 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76827 00:20:24.209 17:50:47 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76827 00:20:24.776 [2024-11-20 17:50:48.209250] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.776 [2024-11-20 17:50:48.209297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:24.776 [2024-11-20 17:50:48.209307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:24.776 [2024-11-20 17:50:48.209314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.776 [2024-11-20 17:50:48.209333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:24.776 [2024-11-20 17:50:48.211487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.776 [2024-11-20 17:50:48.211512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:24.776 [2024-11-20 17:50:48.211523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.139 ms 00:20:24.776 [2024-11-20 17:50:48.211529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.776 [2024-11-20 17:50:48.211748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.776 [2024-11-20 17:50:48.211755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:24.776 [2024-11-20 17:50:48.211763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:20:24.776 [2024-11-20 17:50:48.211769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.776 [2024-11-20 17:50:48.214901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.776 [2024-11-20 17:50:48.214926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:24.776 [2024-11-20 17:50:48.214936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:20:24.776 [2024-11-20 17:50:48.214941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.776 [2024-11-20 17:50:48.220152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.220306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:24.777 [2024-11-20 17:50:48.220322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:20:24.777 [2024-11-20 17:50:48.220328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.227480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.227583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:24.777 [2024-11-20 17:50:48.227600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.105 ms 00:20:24.777 [2024-11-20 17:50:48.227611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.234341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.234478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:24.777 [2024-11-20 17:50:48.234531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.698 ms 00:20:24.777 [2024-11-20 17:50:48.234550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.234664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.234768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:24.777 [2024-11-20 17:50:48.234790] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:24.777 [2024-11-20 17:50:48.234805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.242439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.242559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:24.777 [2024-11-20 17:50:48.242602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.608 ms 00:20:24.777 [2024-11-20 17:50:48.242619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.249993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.250019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:24.777 [2024-11-20 17:50:48.250029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.338 ms 00:20:24.777 [2024-11-20 17:50:48.250035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.257156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.257247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:24.777 [2024-11-20 17:50:48.257262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.091 ms 00:20:24.777 [2024-11-20 17:50:48.257267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.264088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.777 [2024-11-20 17:50:48.264175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:24.777 [2024-11-20 17:50:48.264188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.773 ms 00:20:24.777 [2024-11-20 17:50:48.264194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.777 [2024-11-20 17:50:48.264219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:24.777 [2024-11-20 17:50:48.264231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264298] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 
[2024-11-20 17:50:48.264461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:24.777 [2024-11-20 17:50:48.264620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:24.777 [2024-11-20 17:50:48.264627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:24.778 [2024-11-20 17:50:48.264892] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:24.778 [2024-11-20 17:50:48.264902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:20:24.778 [2024-11-20 17:50:48.264913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:24.778 [2024-11-20 17:50:48.264922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:24.778 [2024-11-20 17:50:48.264928] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:24.778 [2024-11-20 17:50:48.264935] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:24.778 [2024-11-20 17:50:48.264940] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:24.778 [2024-11-20 17:50:48.264947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:24.778 [2024-11-20 17:50:48.264953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:24.778 [2024-11-20 17:50:48.264959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:24.778 [2024-11-20 17:50:48.264964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:24.778 [2024-11-20 17:50:48.264971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:24.778 [2024-11-20 17:50:48.264976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:24.778 [2024-11-20 17:50:48.264984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:20:24.778 [2024-11-20 17:50:48.264989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.778 [2024-11-20 17:50:48.274749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.778 [2024-11-20 17:50:48.274774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:24.778 [2024-11-20 17:50:48.274785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.741 ms 00:20:24.778 [2024-11-20 17:50:48.274791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.778 [2024-11-20 17:50:48.275102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.778 [2024-11-20 17:50:48.275117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:24.778 [2024-11-20 17:50:48.275125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:24.778 [2024-11-20 17:50:48.275132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.778 [2024-11-20 17:50:48.310289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.778 [2024-11-20 17:50:48.310389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.778 [2024-11-20 17:50:48.310403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.778 [2024-11-20 17:50:48.310409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.778 [2024-11-20 17:50:48.310494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.778 [2024-11-20 17:50:48.310502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.778 [2024-11-20 17:50:48.310509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.778 [2024-11-20 17:50:48.310516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.778 [2024-11-20 17:50:48.310554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.778 [2024-11-20 17:50:48.310561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.778 [2024-11-20 17:50:48.310570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.778 [2024-11-20 17:50:48.310576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.778 [2024-11-20 17:50:48.310590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.778 [2024-11-20 17:50:48.310596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.778 [2024-11-20 17:50:48.310603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.778 [2024-11-20 17:50:48.310608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.039 [2024-11-20 17:50:48.369826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.039 [2024-11-20 17:50:48.369951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.039 [2024-11-20 17:50:48.369967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.369973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 
17:50:48.417851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.040 [2024-11-20 17:50:48.417967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.040 [2024-11-20 17:50:48.417983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.417991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 17:50:48.418052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.040 [2024-11-20 17:50:48.418060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.040 [2024-11-20 17:50:48.418069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.418075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 17:50:48.418100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.040 [2024-11-20 17:50:48.418107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.040 [2024-11-20 17:50:48.418114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.418120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 17:50:48.418194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.040 [2024-11-20 17:50:48.418201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.040 [2024-11-20 17:50:48.418209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.418215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 17:50:48.418241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.040 [2024-11-20 17:50:48.418248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:25.040 [2024-11-20 17:50:48.418256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.418262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 17:50:48.418293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.040 [2024-11-20 17:50:48.418299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.040 [2024-11-20 17:50:48.418308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.418314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 17:50:48.418348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.040 [2024-11-20 17:50:48.418355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.040 [2024-11-20 17:50:48.418363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.040 [2024-11-20 17:50:48.418369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.040 [2024-11-20 17:50:48.418483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.213 ms, result 0 00:20:25.639 17:50:48 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:25.639 17:50:48 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:25.639 [2024-11-20 17:50:49.001681] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:20:25.639 [2024-11-20 17:50:49.001805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76870 ] 00:20:25.640 [2024-11-20 17:50:49.156404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.924 [2024-11-20 17:50:49.241351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.924 [2024-11-20 17:50:49.451048] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.924 [2024-11-20 17:50:49.451092] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:26.193 [2024-11-20 17:50:49.598758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.598795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:26.194 [2024-11-20 17:50:49.598806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:26.194 [2024-11-20 17:50:49.598812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.600886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.600914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.194 [2024-11-20 17:50:49.600922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:20:26.194 [2024-11-20 17:50:49.600927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.600985] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:26.194 [2024-11-20 17:50:49.601747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:26.194 [2024-11-20 17:50:49.601785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.601792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.194 [2024-11-20 17:50:49.601800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:20:26.194 [2024-11-20 17:50:49.601805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.602915] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:26.194 [2024-11-20 17:50:49.612400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.612523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:26.194 [2024-11-20 17:50:49.612537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.487 ms 00:20:26.194 [2024-11-20 17:50:49.612543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.612606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.612615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:26.194 [2024-11-20 17:50:49.612622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:20:26.194 [2024-11-20 17:50:49.612628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.617062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.617087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.194 [2024-11-20 17:50:49.617094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.406 ms 00:20:26.194 [2024-11-20 17:50:49.617100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.617173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.617181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.194 [2024-11-20 17:50:49.617187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:26.194 [2024-11-20 17:50:49.617193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.617211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.617219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:26.194 [2024-11-20 17:50:49.617225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:26.194 [2024-11-20 17:50:49.617230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.617248] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:26.194 [2024-11-20 17:50:49.619853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.619970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.194 [2024-11-20 17:50:49.619982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:20:26.194 [2024-11-20 17:50:49.619988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.620018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.620024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:26.194 [2024-11-20 17:50:49.620030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:26.194 [2024-11-20 17:50:49.620036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.620050] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:26.194 [2024-11-20 17:50:49.620066] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:26.194 [2024-11-20 17:50:49.620092] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:26.194 [2024-11-20 17:50:49.620103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:26.194 [2024-11-20 17:50:49.620183] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:26.194 [2024-11-20 17:50:49.620191] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:26.194 [2024-11-20 17:50:49.620199] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:26.194 [2024-11-20 17:50:49.620206] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:26.194 [2024-11-20 17:50:49.620215] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:26.194 [2024-11-20 17:50:49.620221] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:26.194 [2024-11-20 17:50:49.620227] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:26.194 [2024-11-20 17:50:49.620232] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:26.194 [2024-11-20 17:50:49.620237] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:26.194 [2024-11-20 17:50:49.620243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.620249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:26.194 [2024-11-20 17:50:49.620258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:20:26.194 [2024-11-20 17:50:49.620263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.620329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.194 [2024-11-20 17:50:49.620337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:26.194 [2024-11-20 17:50:49.620343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:26.194 [2024-11-20 17:50:49.620348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.194 [2024-11-20 17:50:49.620422] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:26.194 [2024-11-20 17:50:49.620429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:26.194 [2024-11-20 17:50:49.620435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.194 [2024-11-20 17:50:49.620441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.194 [2024-11-20 17:50:49.620446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:26.194 [2024-11-20 17:50:49.620452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:26.194 [2024-11-20 17:50:49.620457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:26.194 [2024-11-20 17:50:49.620462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:26.194 [2024-11-20 17:50:49.620468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:26.194 [2024-11-20 17:50:49.620473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.194 [2024-11-20 17:50:49.620478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:26.194 [2024-11-20 17:50:49.620483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:26.194 [2024-11-20 17:50:49.620488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.194 [2024-11-20 17:50:49.620497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:26.194 [2024-11-20 17:50:49.620503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:26.194 [2024-11-20 17:50:49.620507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.194 [2024-11-20 17:50:49.620512] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:26.194 [2024-11-20 17:50:49.620517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:26.194 [2024-11-20 17:50:49.620523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:26.195 [2024-11-20 17:50:49.620533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.195 [2024-11-20 17:50:49.620543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:26.195 [2024-11-20 17:50:49.620548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.195 [2024-11-20 17:50:49.620557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:26.195 [2024-11-20 17:50:49.620562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.195 [2024-11-20 17:50:49.620572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:26.195 [2024-11-20 17:50:49.620577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.195 [2024-11-20 17:50:49.620586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:26.195 [2024-11-20 17:50:49.620591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.195 [2024-11-20 17:50:49.620601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:26.195 [2024-11-20 17:50:49.620606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:26.195 [2024-11-20 17:50:49.620611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.195 [2024-11-20 17:50:49.620616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:26.195 [2024-11-20 17:50:49.620620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:26.195 [2024-11-20 17:50:49.620625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:26.195 [2024-11-20 17:50:49.620635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:26.195 [2024-11-20 17:50:49.620640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620645] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:26.195 [2024-11-20 17:50:49.620651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:26.195 [2024-11-20 17:50:49.620657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.195 [2024-11-20 17:50:49.620664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.195 [2024-11-20 17:50:49.620669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:26.195 
[2024-11-20 17:50:49.620674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:26.195 [2024-11-20 17:50:49.620679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:26.195 [2024-11-20 17:50:49.620686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:26.195 [2024-11-20 17:50:49.620691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:26.195 [2024-11-20 17:50:49.620696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:26.195 [2024-11-20 17:50:49.620702] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:26.195 [2024-11-20 17:50:49.620708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.195 [2024-11-20 17:50:49.620714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:26.195 [2024-11-20 17:50:49.620720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:26.195 [2024-11-20 17:50:49.620725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:26.195 [2024-11-20 17:50:49.620731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:26.195 [2024-11-20 17:50:49.620736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:26.195 [2024-11-20 17:50:49.620741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:26.195 [2024-11-20 17:50:49.620746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:26.195 [2024-11-20 17:50:49.620751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:26.195 [2024-11-20 17:50:49.620757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:26.195 [2024-11-20 17:50:49.620762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:26.195 [2024-11-20 17:50:49.620767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:26.195 [2024-11-20 17:50:49.620773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:26.195 [2024-11-20 17:50:49.620778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:26.195 [2024-11-20 17:50:49.620783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:26.195 [2024-11-20 17:50:49.620788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:26.195 [2024-11-20 17:50:49.620794] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.195 [2024-11-20 17:50:49.620801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:26.195 [2024-11-20 17:50:49.620807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:26.195 [2024-11-20 17:50:49.620813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:26.195 [2024-11-20 17:50:49.620818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:26.195 [2024-11-20 17:50:49.620824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.195 [2024-11-20 17:50:49.620829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:26.195 [2024-11-20 17:50:49.620837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:20:26.195 [2024-11-20 17:50:49.620842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.195 [2024-11-20 17:50:49.641637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.195 [2024-11-20 17:50:49.641664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.195 [2024-11-20 17:50:49.641672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.748 ms 00:20:26.195 [2024-11-20 17:50:49.641678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.195 [2024-11-20 17:50:49.641769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.195 [2024-11-20 17:50:49.641779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:26.195 [2024-11-20 17:50:49.641785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:26.195 [2024-11-20 17:50:49.641791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.195 [2024-11-20 17:50:49.682986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.195 [2024-11-20 17:50:49.683019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.196 [2024-11-20 17:50:49.683028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.180 ms 00:20:26.196 [2024-11-20 17:50:49.683037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.196 [2024-11-20 17:50:49.683096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.196 [2024-11-20 17:50:49.683105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.196 [2024-11-20 17:50:49.683112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:26.196 [2024-11-20 17:50:49.683117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.196 [2024-11-20 17:50:49.683407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.196 [2024-11-20 17:50:49.683425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.196 [2024-11-20 17:50:49.683433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:26.196 [2024-11-20 17:50:49.683438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.196 [2024-11-20 
17:50:49.683547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.196 [2024-11-20 17:50:49.683554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.196 [2024-11-20 17:50:49.683561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:26.196 [2024-11-20 17:50:49.683567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.196 [2024-11-20 17:50:49.694454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.196 [2024-11-20 17:50:49.694570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.196 [2024-11-20 17:50:49.694584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.871 ms 00:20:26.196 [2024-11-20 17:50:49.694590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.196 [2024-11-20 17:50:49.704298] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:26.196 [2024-11-20 17:50:49.704326] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:26.196 [2024-11-20 17:50:49.704335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.196 [2024-11-20 17:50:49.704341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:26.196 [2024-11-20 17:50:49.704347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.654 ms 00:20:26.196 [2024-11-20 17:50:49.704353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.196 [2024-11-20 17:50:49.722846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.196 [2024-11-20 17:50:49.722886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:26.196 [2024-11-20 17:50:49.722895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 00:20:26.196 [2024-11-20 17:50:49.722901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.731689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.731716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:26.457 [2024-11-20 17:50:49.731723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.734 ms 00:20:26.457 [2024-11-20 17:50:49.731729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.740576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.740610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:26.457 [2024-11-20 17:50:49.740617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms 00:20:26.457 [2024-11-20 17:50:49.740623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.741099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.741191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:26.457 [2024-11-20 17:50:49.741203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:20:26.457 [2024-11-20 17:50:49.741208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.785099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.785134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:26.457 [2024-11-20 17:50:49.785144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.869 ms 00:20:26.457 [2024-11-20 17:50:49.785151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.793537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:26.457 [2024-11-20 17:50:49.805124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.805152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:26.457 [2024-11-20 17:50:49.805162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.913 ms 00:20:26.457 [2024-11-20 17:50:49.805172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.805239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.805247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:26.457 [2024-11-20 17:50:49.805254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:26.457 [2024-11-20 17:50:49.805260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.805296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.805302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:26.457 [2024-11-20 17:50:49.805309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:26.457 [2024-11-20 17:50:49.805315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.805341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.805348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:26.457 [2024-11-20 17:50:49.805354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:26.457 [2024-11-20 17:50:49.805360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.805383] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:26.457 [2024-11-20 17:50:49.805390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.805396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:26.457 [2024-11-20 17:50:49.805402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:26.457 [2024-11-20 17:50:49.805408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.823390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.823418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:26.457 [2024-11-20 17:50:49.823427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.966 ms 00:20:26.457 [2024-11-20 17:50:49.823434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.823507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.457 [2024-11-20 17:50:49.823515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:26.457 [2024-11-20 17:50:49.823522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:26.457 [2024-11-20 17:50:49.823528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.457 [2024-11-20 17:50:49.824189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:26.457 [2024-11-20 17:50:49.826435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.190 ms, result 0 00:20:26.457 [2024-11-20 17:50:49.827109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.457 [2024-11-20 17:50:49.841727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:27.415  [2024-11-20T17:50:51.901Z] Copying: 24/256 [MB] (24 MBps) [2024-11-20T17:50:53.282Z] Copying: 40/256 [MB] (15 MBps) [2024-11-20T17:50:53.850Z] Copying: 61/256 [MB] (21 MBps) [2024-11-20T17:50:55.225Z] Copying: 83/256 [MB] (21 MBps) [2024-11-20T17:50:56.157Z] Copying: 105/256 [MB] (22 MBps) [2024-11-20T17:50:57.090Z] Copying: 126/256 [MB] (20 MBps) [2024-11-20T17:50:58.022Z] Copying: 155/256 [MB] (29 MBps) [2024-11-20T17:50:58.955Z] Copying: 172/256 [MB] (16 MBps) [2024-11-20T17:50:59.897Z] Copying: 196/256 [MB] (23 MBps) [2024-11-20T17:51:01.285Z] Copying: 216/256 [MB] (20 MBps) [2024-11-20T17:51:01.857Z] Copying: 230/256 [MB] (14 MBps) [2024-11-20T17:51:02.120Z] Copying: 254/256 [MB] (23 MBps) [2024-11-20T17:51:02.120Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-20 17:51:01.886186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:38.580 [2024-11-20 17:51:01.893508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.580 [2024-11-20 17:51:01.893626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:38.580 [2024-11-20 17:51:01.893641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:38.580 [2024-11-20 17:51:01.893654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.580 [2024-11-20 17:51:01.893675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:38.580 [2024-11-20 17:51:01.895793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.580 [2024-11-20 17:51:01.895818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:38.580 [2024-11-20 17:51:01.895827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:20:38.580 [2024-11-20 17:51:01.895833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.580 [2024-11-20 17:51:01.896038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.580 [2024-11-20 17:51:01.896048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:38.580 [2024-11-20 17:51:01.896055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:20:38.580 [2024-11-20 17:51:01.896061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.580 [2024-11-20 17:51:01.898833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.580 [2024-11-20 17:51:01.898853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:38.580 [2024-11-20 17:51:01.898860] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:20:38.580 [2024-11-20 17:51:01.898866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.580 [2024-11-20 17:51:01.904203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.580 [2024-11-20 17:51:01.904302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:38.580 [2024-11-20 17:51:01.904314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.314 ms 00:20:38.580 [2024-11-20 17:51:01.904320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.580 [2024-11-20 17:51:01.921955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.580 [2024-11-20 17:51:01.921984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:38.580 [2024-11-20 17:51:01.921992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.591 ms 00:20:38.580 [2024-11-20 17:51:01.921997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.580 [2024-11-20 17:51:01.933554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.580 [2024-11-20 17:51:01.933583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:38.580 [2024-11-20 17:51:01.933594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.528 ms 00:20:38.580 [2024-11-20 17:51:01.933600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.580 [2024-11-20 17:51:01.933694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.581 [2024-11-20 17:51:01.933701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:38.581 [2024-11-20 17:51:01.933707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:38.581 [2024-11-20 17:51:01.933713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.581 [2024-11-20 17:51:01.951634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.581 [2024-11-20 17:51:01.951740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:38.581 [2024-11-20 17:51:01.951753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.903 ms 00:20:38.581 [2024-11-20 17:51:01.951758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.581 [2024-11-20 17:51:01.969500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.581 [2024-11-20 17:51:01.969525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:38.581 [2024-11-20 17:51:01.969533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.709 ms 00:20:38.581 [2024-11-20 17:51:01.969538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.581 [2024-11-20 17:51:01.987012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.581 [2024-11-20 17:51:01.987038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:38.581 [2024-11-20 17:51:01.987046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.445 ms 00:20:38.581 [2024-11-20 17:51:01.987051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.581 [2024-11-20 17:51:02.004161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.581 [2024-11-20 17:51:02.004186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:38.581 [2024-11-20 17:51:02.004193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.062 ms 00:20:38.581 [2024-11-20 17:51:02.004199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.581 [2024-11-20 17:51:02.004227] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:38.581 [2024-11-20 17:51:02.004238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 
17:51:02.004364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:38.581 [2024-11-20 17:51:02.004506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:38.581 [2024-11-20 17:51:02.004636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:38.582 [2024-11-20 17:51:02.004818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:38.582 [2024-11-20 17:51:02.004824] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:20:38.582 [2024-11-20 17:51:02.004831] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:38.582 [2024-11-20 17:51:02.004836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:38.582 [2024-11-20 17:51:02.004842] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:38.582 [2024-11-20 17:51:02.004847] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:38.582 [2024-11-20 17:51:02.004853] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:38.582 [2024-11-20 17:51:02.004859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:38.582 [2024-11-20 17:51:02.004864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:38.582 [2024-11-20 17:51:02.004883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:38.582 [2024-11-20 17:51:02.004889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:38.582 [2024-11-20 17:51:02.004894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.582 [2024-11-20 17:51:02.004912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:38.582 [2024-11-20 17:51:02.004919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:20:38.582 [2024-11-20 17:51:02.004925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.582 [2024-11-20 17:51:02.014383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.582 [2024-11-20 17:51:02.014411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:38.582 [2024-11-20 17:51:02.014420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.444 ms 00:20:38.582 [2024-11-20 17:51:02.014426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.582 [2024-11-20 17:51:02.014716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.582 [2024-11-20 17:51:02.014732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:38.582 [2024-11-20 17:51:02.014739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:20:38.582 [2024-11-20 17:51:02.014745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.582 [2024-11-20 17:51:02.042336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.582 [2024-11-20 17:51:02.042364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.582 [2024-11-20 17:51:02.042371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.582 [2024-11-20 17:51:02.042377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.582 
[2024-11-20 17:51:02.042441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.582 [2024-11-20 17:51:02.042448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.582 [2024-11-20 17:51:02.042455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.582 [2024-11-20 17:51:02.042460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.582 [2024-11-20 17:51:02.042491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.582 [2024-11-20 17:51:02.042498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.582 [2024-11-20 17:51:02.042503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.582 [2024-11-20 17:51:02.042509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.582 [2024-11-20 17:51:02.042521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.582 [2024-11-20 17:51:02.042529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.582 [2024-11-20 17:51:02.042535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.582 [2024-11-20 17:51:02.042540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.582 [2024-11-20 17:51:02.102686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.582 [2024-11-20 17:51:02.102718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.582 [2024-11-20 17:51:02.102728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.582 [2024-11-20 17:51:02.102734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.151627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.844 [2024-11-20 17:51:02.151661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.844 [2024-11-20 17:51:02.151669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.844 [2024-11-20 17:51:02.151676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.151727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.844 [2024-11-20 17:51:02.151734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.844 [2024-11-20 17:51:02.151741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.844 [2024-11-20 17:51:02.151746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.151769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.844 [2024-11-20 17:51:02.151775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.844 [2024-11-20 17:51:02.151785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.844 [2024-11-20 17:51:02.151790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.151860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.844 [2024-11-20 17:51:02.151887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.844 [2024-11-20 17:51:02.151894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.844 [2024-11-20 17:51:02.151900] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.151925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.844 [2024-11-20 17:51:02.151950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.844 [2024-11-20 17:51:02.151957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.844 [2024-11-20 17:51:02.151965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.151993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.844 [2024-11-20 17:51:02.151999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.844 [2024-11-20 17:51:02.152006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.844 [2024-11-20 17:51:02.152012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.152045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.844 [2024-11-20 17:51:02.152052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.844 [2024-11-20 17:51:02.152061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.844 [2024-11-20 17:51:02.152067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.844 [2024-11-20 17:51:02.152172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 258.656 ms, result 0 00:20:39.416 00:20:39.416 00:20:39.416 17:51:02 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:39.416 17:51:02 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:39.987 17:51:03 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:39.987 [2024-11-20 17:51:03.344231] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:20:39.987 [2024-11-20 17:51:03.344522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77024 ] 00:20:39.987 [2024-11-20 17:51:03.500906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.246 [2024-11-20 17:51:03.578641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.506 [2024-11-20 17:51:03.789634] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.506 [2024-11-20 17:51:03.789679] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.506 [2024-11-20 17:51:03.937273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.937310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.506 [2024-11-20 17:51:03.937321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:40.506 [2024-11-20 17:51:03.937327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.939400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.939533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.506 [2024-11-20 17:51:03.939546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:20:40.506 [2024-11-20 17:51:03.939552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.939657] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.506 [2024-11-20 17:51:03.940227] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.506 [2024-11-20 17:51:03.940244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.940251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.506 [2024-11-20 17:51:03.940258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:20:40.506 [2024-11-20 17:51:03.940264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.941238] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.506 [2024-11-20 17:51:03.950716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.950835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.506 [2024-11-20 17:51:03.950849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.479 ms 00:20:40.506 [2024-11-20 17:51:03.950855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.950931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.950940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.506 [2024-11-20 17:51:03.950947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:40.506 [2024-11-20 17:51:03.950953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.955358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:40.506 [2024-11-20 17:51:03.955383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.506 [2024-11-20 17:51:03.955390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.376 ms 00:20:40.506 [2024-11-20 17:51:03.955396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.955472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.955480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.506 [2024-11-20 17:51:03.955486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:40.506 [2024-11-20 17:51:03.955492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.955508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.955515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.506 [2024-11-20 17:51:03.955522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.506 [2024-11-20 17:51:03.955528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.955545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:40.506 [2024-11-20 17:51:03.958092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.958192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.506 [2024-11-20 17:51:03.958204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:20:40.506 [2024-11-20 17:51:03.958210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.958240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.958247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.506 [2024-11-20 17:51:03.958254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.506 [2024-11-20 17:51:03.958259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.958272] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.506 [2024-11-20 17:51:03.958289] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.506 [2024-11-20 17:51:03.958314] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.506 [2024-11-20 17:51:03.958326] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.506 [2024-11-20 17:51:03.958410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.506 [2024-11-20 17:51:03.958419] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.506 [2024-11-20 17:51:03.958427] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.506 [2024-11-20 17:51:03.958435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.506 [2024-11-20 17:51:03.958444] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.506 [2024-11-20 17:51:03.958450] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:40.506 [2024-11-20 17:51:03.958456] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.506 [2024-11-20 17:51:03.958461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.506 [2024-11-20 17:51:03.958467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.506 [2024-11-20 17:51:03.958472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.958478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.506 [2024-11-20 17:51:03.958484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:40.506 [2024-11-20 17:51:03.958489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.958556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.506 [2024-11-20 17:51:03.958564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.506 [2024-11-20 17:51:03.958570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:40.506 [2024-11-20 17:51:03.958575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.506 [2024-11-20 17:51:03.958650] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.506 [2024-11-20 17:51:03.958657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.506 [2024-11-20 17:51:03.958663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.506 [2024-11-20 17:51:03.958669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.506 [2024-11-20 17:51:03.958675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.506 [2024-11-20 17:51:03.958680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.506 [2024-11-20 17:51:03.958685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:40.506 [2024-11-20 17:51:03.958690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.506 [2024-11-20 17:51:03.958696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:40.506 [2024-11-20 17:51:03.958701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.506 [2024-11-20 17:51:03.958707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.506 [2024-11-20 17:51:03.958712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:40.507 [2024-11-20 17:51:03.958717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.507 [2024-11-20 17:51:03.958727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.507 [2024-11-20 17:51:03.958733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:40.507 [2024-11-20 17:51:03.958738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.507 [2024-11-20 17:51:03.958748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:40.507 [2024-11-20 17:51:03.958753] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.507 [2024-11-20 17:51:03.958763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.507 [2024-11-20 17:51:03.958773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.507 [2024-11-20 17:51:03.958778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.507 [2024-11-20 17:51:03.958787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.507 [2024-11-20 17:51:03.958792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.507 [2024-11-20 17:51:03.958802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.507 [2024-11-20 17:51:03.958807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.507 [2024-11-20 17:51:03.958817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.507 [2024-11-20 17:51:03.958821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.507 [2024-11-20 17:51:03.958831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.507 [2024-11-20 17:51:03.958836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:40.507 [2024-11-20 17:51:03.958841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.507 [2024-11-20 17:51:03.958847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.507 [2024-11-20 17:51:03.958852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:40.507 [2024-11-20 17:51:03.958857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.507 [2024-11-20 17:51:03.958867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:40.507 [2024-11-20 17:51:03.958882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958888] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.507 [2024-11-20 17:51:03.958893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.507 [2024-11-20 17:51:03.958899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.507 [2024-11-20 17:51:03.958907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.507 [2024-11-20 17:51:03.958913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.507 [2024-11-20 17:51:03.958919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.507 [2024-11-20 17:51:03.958924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.507 
[2024-11-20 17:51:03.958929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.507 [2024-11-20 17:51:03.958934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.507 [2024-11-20 17:51:03.958939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.507 [2024-11-20 17:51:03.958946] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.507 [2024-11-20 17:51:03.958953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.507 [2024-11-20 17:51:03.958960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:40.507 [2024-11-20 17:51:03.958965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:40.507 [2024-11-20 17:51:03.958971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:40.507 [2024-11-20 17:51:03.958976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:40.507 [2024-11-20 17:51:03.958981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:40.507 [2024-11-20 17:51:03.958987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:40.507 [2024-11-20 17:51:03.958992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:40.507 [2024-11-20 17:51:03.958998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:40.507 [2024-11-20 17:51:03.959003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:40.507 [2024-11-20 17:51:03.959008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:40.507 [2024-11-20 17:51:03.959014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:40.507 [2024-11-20 17:51:03.959019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:40.507 [2024-11-20 17:51:03.959024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:40.507 [2024-11-20 17:51:03.959030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:40.507 [2024-11-20 17:51:03.959035] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.507 [2024-11-20 17:51:03.959041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.507 [2024-11-20 17:51:03.959047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.507 [2024-11-20 17:51:03.959052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.507 [2024-11-20 17:51:03.959059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.507 [2024-11-20 17:51:03.959064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.507 [2024-11-20 17:51:03.959070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.507 [2024-11-20 17:51:03.959075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.507 [2024-11-20 17:51:03.959083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:20:40.507 [2024-11-20 17:51:03.959088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.507 [2024-11-20 17:51:03.979849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.507 [2024-11-20 17:51:03.979890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.507 [2024-11-20 17:51:03.979899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.722 ms 00:20:40.507 [2024-11-20 17:51:03.979905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.507 [2024-11-20 17:51:03.979998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.507 [2024-11-20 17:51:03.980009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.507 [2024-11-20 17:51:03.980015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:40.507 [2024-11-20 17:51:03.980020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.507 [2024-11-20 17:51:04.023723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.507 [2024-11-20 17:51:04.023755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.508 [2024-11-20 17:51:04.023765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.687 ms 00:20:40.508 [2024-11-20 17:51:04.023773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.508 [2024-11-20 17:51:04.023831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.508 [2024-11-20 17:51:04.023840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.508 [2024-11-20 17:51:04.023846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:40.508 [2024-11-20 17:51:04.023852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.508 [2024-11-20 17:51:04.024158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.508 [2024-11-20 17:51:04.024171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.508 [2024-11-20 17:51:04.024178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:40.508 [2024-11-20 17:51:04.024184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.508 [2024-11-20 17:51:04.024291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.508 [2024-11-20 17:51:04.024298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.508 [2024-11-20 17:51:04.024305] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:40.508 [2024-11-20 17:51:04.024310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.508 [2024-11-20 17:51:04.035071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.508 [2024-11-20 17:51:04.035096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.508 [2024-11-20 17:51:04.035104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.744 ms 00:20:40.508 [2024-11-20 17:51:04.035110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.044717] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:40.766 [2024-11-20 17:51:04.044744] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.766 [2024-11-20 17:51:04.044754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.044761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.766 [2024-11-20 17:51:04.044767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.558 ms 00:20:40.766 [2024-11-20 17:51:04.044773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.063107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.063143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.766 [2024-11-20 17:51:04.063152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.285 ms 00:20:40.766 [2024-11-20 17:51:04.063157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.071972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.071998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.766 [2024-11-20 17:51:04.072005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.760 ms 00:20:40.766 [2024-11-20 17:51:04.072011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.080512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.080534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.766 [2024-11-20 17:51:04.080542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.459 ms 00:20:40.766 [2024-11-20 17:51:04.080547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.081014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.081034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.766 [2024-11-20 17:51:04.081041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:20:40.766 [2024-11-20 17:51:04.081047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.124474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.124516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.766 [2024-11-20 17:51:04.124527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.410 ms 00:20:40.766 [2024-11-20 17:51:04.124534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.133096] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:40.766 [2024-11-20 17:51:04.144808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.144844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.766 [2024-11-20 17:51:04.144854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.206 ms 00:20:40.766 [2024-11-20 17:51:04.144863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.144951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.144959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.766 [2024-11-20 17:51:04.144966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:40.766 [2024-11-20 17:51:04.144972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-11-20 17:51:04.145008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-11-20 17:51:04.145015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.766 [2024-11-20 17:51:04.145022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:40.767 [2024-11-20 17:51:04.145027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-11-20 17:51:04.145055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-11-20 17:51:04.145062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.767 [2024-11-20 17:51:04.145068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.767 [2024-11-20 17:51:04.145074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-11-20 17:51:04.145097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.767 [2024-11-20 17:51:04.145104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-11-20 17:51:04.145110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.767 [2024-11-20 17:51:04.145116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:40.767 [2024-11-20 17:51:04.145121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-11-20 17:51:04.162915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-11-20 17:51:04.162943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.767 [2024-11-20 17:51:04.162952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.779 ms 00:20:40.767 [2024-11-20 17:51:04.162958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-11-20 17:51:04.163031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-11-20 17:51:04.163039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.767 [2024-11-20 17:51:04.163046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:40.767 [2024-11-20 17:51:04.163052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 
[2024-11-20 17:51:04.163668] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.767 [2024-11-20 17:51:04.165965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.180 ms, result 0 00:20:40.767 [2024-11-20 17:51:04.166618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:40.767 [2024-11-20 17:51:04.181687] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.026  [2024-11-20T17:51:04.566Z] Copying: 4096/4096 [kB] (average 30 MBps)[2024-11-20 17:51:04.316794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:41.026 [2024-11-20 17:51:04.325289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.325395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:41.026 [2024-11-20 17:51:04.325444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:41.026 [2024-11-20 17:51:04.325473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.325508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:41.026 [2024-11-20 17:51:04.328115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.328208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:41.026 [2024-11-20 17:51:04.328254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:20:41.026 [2024-11-20 17:51:04.328276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.331024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.331112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:41.026 [2024-11-20 17:51:04.331157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:20:41.026 [2024-11-20 17:51:04.331177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.335523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.335612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:41.026 [2024-11-20 17:51:04.335655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:20:41.026 [2024-11-20 17:51:04.335676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.342740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.342826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:41.026 [2024-11-20 17:51:04.342878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.027 ms 00:20:41.026 [2024-11-20 17:51:04.342900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.365909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.366012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:41.026 [2024-11-20 17:51:04.366057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.946 ms 00:20:41.026 [2024-11-20 17:51:04.366077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.380230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.380333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:41.026 [2024-11-20 17:51:04.380383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.113 ms 00:20:41.026 [2024-11-20 17:51:04.380404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.380764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.380832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:41.026 [2024-11-20 17:51:04.380928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:41.026 [2024-11-20 17:51:04.380952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.026 [2024-11-20 17:51:04.405124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.026 [2024-11-20 17:51:04.405245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:41.027 [2024-11-20 17:51:04.405296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.851 ms 00:20:41.027 [2024-11-20 17:51:04.405318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.027 [2024-11-20 17:51:04.428142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.027 [2024-11-20 17:51:04.428248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:41.027 [2024-11-20 17:51:04.428295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.744 ms 00:20:41.027 [2024-11-20 17:51:04.428315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.027 [2024-11-20 17:51:04.450435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.027 [2024-11-20 17:51:04.450537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:41.027 [2024-11-20 17:51:04.450582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.070 ms 00:20:41.027 [2024-11-20 17:51:04.450603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.027 [2024-11-20 17:51:04.473358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.027 [2024-11-20 17:51:04.473456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:41.027 [2024-11-20 17:51:04.473501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.679 ms 00:20:41.027 [2024-11-20 17:51:04.473521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.027 [2024-11-20 17:51:04.473559] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:41.027 [2024-11-20 17:51:04.473585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.473616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.473643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.473671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 
[2024-11-20 17:51:04.473755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.473784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.473812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.473839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.473963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:20:41.027 [2024-11-20 17:51:04.474707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.474978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:41.027 [2024-11-20 17:51:04.475476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:41.028 [2024-11-20 17:51:04.475774] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:41.028 [2024-11-20 17:51:04.475781] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:20:41.028 [2024-11-20 17:51:04.475788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:41.028 [2024-11-20 17:51:04.475795] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:20:41.028 [2024-11-20 17:51:04.475802] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:41.028 [2024-11-20 17:51:04.475809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:41.028 [2024-11-20 17:51:04.475816] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:41.028 [2024-11-20 17:51:04.475823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:41.028 [2024-11-20 17:51:04.475829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:41.028 [2024-11-20 17:51:04.475836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:41.028 [2024-11-20 17:51:04.475842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:41.028 [2024-11-20 17:51:04.475849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.028 [2024-11-20 17:51:04.475858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:41.028 [2024-11-20 17:51:04.475866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:20:41.028 [2024-11-20 17:51:04.475883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.028 [2024-11-20 17:51:04.488242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.028 [2024-11-20 17:51:04.488348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:41.028 [2024-11-20 17:51:04.488361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.328 ms 00:20:41.028 [2024-11-20 17:51:04.488369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.028 [2024-11-20 17:51:04.488720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.028 [2024-11-20 17:51:04.488733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:41.028 [2024-11-20 17:51:04.488742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:41.028 [2024-11-20 17:51:04.488749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.028 [2024-11-20 17:51:04.523754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.028 [2024-11-20 17:51:04.523863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.028 [2024-11-20 17:51:04.523887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.028 [2024-11-20 17:51:04.523896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.028 [2024-11-20 17:51:04.523961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.028 [2024-11-20 17:51:04.523969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.028 [2024-11-20 17:51:04.523977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.028 [2024-11-20 17:51:04.523983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.028 [2024-11-20 17:51:04.524021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.028 [2024-11-20 17:51:04.524030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.028 [2024-11-20 17:51:04.524038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.028 [2024-11-20 17:51:04.524045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.028 [2024-11-20 17:51:04.524061] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.028 [2024-11-20 17:51:04.524072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:41.028 [2024-11-20 17:51:04.524079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.028 [2024-11-20 17:51:04.524086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.287 [2024-11-20 17:51:04.601183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.287 [2024-11-20 17:51:04.601219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.287 [2024-11-20 17:51:04.601229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.287 [2024-11-20 17:51:04.601237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.287 [2024-11-20 17:51:04.664456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.287 [2024-11-20 17:51:04.664492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.288 [2024-11-20 17:51:04.664502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.288 [2024-11-20 17:51:04.664510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.288 [2024-11-20 17:51:04.664555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.288 [2024-11-20 17:51:04.664564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.288 [2024-11-20 17:51:04.664572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.288 [2024-11-20 17:51:04.664579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.288 [2024-11-20 17:51:04.664606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.288 [2024-11-20 17:51:04.664613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.288 [2024-11-20 17:51:04.664625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.288 [2024-11-20 17:51:04.664632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.288 [2024-11-20 17:51:04.664714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.288 [2024-11-20 17:51:04.664723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.288 [2024-11-20 17:51:04.664731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.288 [2024-11-20 17:51:04.664738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.288 [2024-11-20 17:51:04.664768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.288 [2024-11-20 17:51:04.664777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:41.288 [2024-11-20 17:51:04.664788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.288 [2024-11-20 17:51:04.664795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.288 [2024-11-20 17:51:04.664830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.288 [2024-11-20 17:51:04.664839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.288 [2024-11-20 17:51:04.664846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.288 [2024-11-20 17:51:04.664853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:41.288 [2024-11-20 17:51:04.664929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.288 [2024-11-20 17:51:04.664945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.288 [2024-11-20 17:51:04.664957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.288 [2024-11-20 17:51:04.664964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.288 [2024-11-20 17:51:04.665096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.784 ms, result 0 00:20:42.236 00:20:42.236 00:20:42.236 17:51:05 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77049 00:20:42.236 17:51:05 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:42.236 17:51:05 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77049 00:20:42.236 17:51:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77049 ']' 00:20:42.236 17:51:05 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.236 17:51:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.236 17:51:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.236 17:51:05 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.236 17:51:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:42.236 [2024-11-20 17:51:05.669543] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:20:42.236 [2024-11-20 17:51:05.669659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77049 ] 00:20:42.503 [2024-11-20 17:51:05.828779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.503 [2024-11-20 17:51:05.923283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.069 17:51:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.069 17:51:06 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:43.069 17:51:06 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:43.328 [2024-11-20 17:51:06.708554] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.328 [2024-11-20 17:51:06.708609] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.590 [2024-11-20 17:51:06.882678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.882855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:43.590 [2024-11-20 17:51:06.882892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:43.590 [2024-11-20 17:51:06.882901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.885506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.885534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.590 [2024-11-20 17:51:06.885545] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:20:43.590 [2024-11-20 17:51:06.885553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.885624] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:43.590 [2024-11-20 17:51:06.886472] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:43.590 [2024-11-20 17:51:06.886595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.886646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.590 [2024-11-20 17:51:06.886672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:20:43.590 [2024-11-20 17:51:06.886691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.888027] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:43.590 [2024-11-20 17:51:06.900730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.900861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:43.590 [2024-11-20 17:51:06.900935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.709 ms 00:20:43.590 [2024-11-20 17:51:06.900962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.901446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.901808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:43.590 [2024-11-20 17:51:06.902089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:43.590 [2024-11-20 17:51:06.902183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.910107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.910390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.590 [2024-11-20 17:51:06.910574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.722 ms 00:20:43.590 [2024-11-20 17:51:06.910650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.910987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.911095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.590 [2024-11-20 17:51:06.911162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:20:43.590 [2024-11-20 17:51:06.911245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.911401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.911580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:43.590 [2024-11-20 17:51:06.911653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:43.590 [2024-11-20 17:51:06.911715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.911891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:43.590 [2024-11-20 17:51:06.916840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.916950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.590 [2024-11-20 17:51:06.917001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.990 ms 00:20:43.590 [2024-11-20 17:51:06.917023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.917084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.917106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:43.590 [2024-11-20 17:51:06.917127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:43.590 [2024-11-20 17:51:06.917148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.917180] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:43.590 [2024-11-20 17:51:06.917210] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:43.590 [2024-11-20 17:51:06.917307] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:43.590 [2024-11-20 17:51:06.917346] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:43.590 [2024-11-20 17:51:06.917471] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:43.590 [2024-11-20 17:51:06.917503] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:43.590 [2024-11-20 17:51:06.917539] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:43.590 [2024-11-20 17:51:06.917570] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:43.590 [2024-11-20 17:51:06.917637] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:43.590 [2024-11-20 17:51:06.917670] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:43.590 [2024-11-20 17:51:06.917691] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:43.590 [2024-11-20 17:51:06.917708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:43.590 [2024-11-20 17:51:06.917731] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:43.590 [2024-11-20 17:51:06.918165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.918214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:43.590 [2024-11-20 17:51:06.918239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:20:43.590 [2024-11-20 17:51:06.918260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 [2024-11-20 17:51:06.918373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.590 [2024-11-20 17:51:06.918398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:43.590 [2024-11-20 17:51:06.918430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:43.590 [2024-11-20 17:51:06.918495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.590 
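The layout figures above fit together: with 23592960 L2P entries at 4 bytes each, the mapping table needs exactly 90 MiB, which is the size of the l2p region in the dump that follows. A quick check:

  # L2P footprint = entries * address size; 1048576 bytes per MiB.
  echo $(( 23592960 * 4 / 1048576 ))   # prints 90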
[2024-11-20 17:51:06.918992] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:43.590 [2024-11-20 17:51:06.919023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:43.590 [2024-11-20 17:51:06.919034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.590 [2024-11-20 17:51:06.919044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.590 [2024-11-20 17:51:06.919052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:43.590 [2024-11-20 17:51:06.919060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:43.590 [2024-11-20 17:51:06.919067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:43.590 [2024-11-20 17:51:06.919079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:43.590 [2024-11-20 17:51:06.919086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:43.590 [2024-11-20 17:51:06.919094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.590 [2024-11-20 17:51:06.919101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:43.591 [2024-11-20 17:51:06.919109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:43.591 [2024-11-20 17:51:06.919115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:43.591 [2024-11-20 17:51:06.919123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:43.591 [2024-11-20 17:51:06.919130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:43.591 [2024-11-20 17:51:06.919137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:43.591 [2024-11-20 17:51:06.919152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:43.591 [2024-11-20 17:51:06.919158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:43.591 [2024-11-20 17:51:06.919178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.591 [2024-11-20 17:51:06.919192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:43.591 [2024-11-20 17:51:06.919201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.591 [2024-11-20 17:51:06.919215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:43.591 [2024-11-20 17:51:06.919222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.591 [2024-11-20 17:51:06.919236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:43.591 [2024-11-20 17:51:06.919243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:43.591 [2024-11-20 17:51:06.919257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:43.591 [2024-11-20 17:51:06.919263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.591 [2024-11-20 17:51:06.919278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:43.591 [2024-11-20 17:51:06.919286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:43.591 [2024-11-20 17:51:06.919292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:43.591 [2024-11-20 17:51:06.919300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:43.591 [2024-11-20 17:51:06.919307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:43.591 [2024-11-20 17:51:06.919316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:43.591 [2024-11-20 17:51:06.919331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:43.591 [2024-11-20 17:51:06.919337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919344] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:43.591 [2024-11-20 17:51:06.919354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:43.591 [2024-11-20 17:51:06.919362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:43.591 [2024-11-20 17:51:06.919369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:43.591 [2024-11-20 17:51:06.919377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:43.591 [2024-11-20 17:51:06.919384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:43.591 [2024-11-20 17:51:06.919391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:43.591 [2024-11-20 17:51:06.919398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:43.591 [2024-11-20 17:51:06.919406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:43.591 [2024-11-20 17:51:06.919412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:43.591 [2024-11-20 17:51:06.919421] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:43.591 [2024-11-20 17:51:06.919431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.591 [2024-11-20 17:51:06.919443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:43.591 [2024-11-20 17:51:06.919450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:43.591 [2024-11-20 17:51:06.919459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:43.591 [2024-11-20 17:51:06.919467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:43.591 [2024-11-20 17:51:06.919475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:43.591 [2024-11-20 17:51:06.919482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:43.591 [2024-11-20 17:51:06.919490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:43.591 [2024-11-20 17:51:06.919497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:43.591 [2024-11-20 17:51:06.919504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:43.591 [2024-11-20 17:51:06.919511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:43.591 [2024-11-20 17:51:06.919520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:43.591 [2024-11-20 17:51:06.919527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:43.591 [2024-11-20 17:51:06.919535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:43.591 [2024-11-20 17:51:06.919542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:43.591 [2024-11-20 17:51:06.919551] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:43.591 [2024-11-20 17:51:06.919559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:43.591 [2024-11-20 17:51:06.919570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:43.591 [2024-11-20 17:51:06.919577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:43.591 [2024-11-20 17:51:06.919586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:43.591 [2024-11-20 17:51:06.919593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:43.591 [2024-11-20 17:51:06.919602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.919610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:43.591 [2024-11-20 17:51:06.919619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:20:43.591 [2024-11-20 17:51:06.919626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 [2024-11-20 17:51:06.945253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.945381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.591 [2024-11-20 17:51:06.945399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.568 ms 00:20:43.591 [2024-11-20 17:51:06.945409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 
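In the SB metadata dump above, blk_offs and blk_sz are given in FTL blocks. Assuming the default 4 KiB FTL block size (an assumption, but it reproduces the MiB figures printed earlier in the dump), the hex sizes convert as this sketch shows:

  # Convert blk_sz values from the dump to MiB at 4 KiB per block (assumed).
  for sz in 0x20 0x5a00 0x800; do
    awk -v b=$(( sz )) -v l="$sz" 'BEGIN { printf "%s blocks = %.2f MiB\n", l, b * 4096 / 1048576 }'
  done
  # 0x20 blocks = 0.12 MiB    -> the sb region
  # 0x5a00 blocks = 90.00 MiB -> the l2p region
  # 0x800 blocks = 8.00 MiB   -> each p2l region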
[2024-11-20 17:51:06.945529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.945539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:43.591 [2024-11-20 17:51:06.945549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:43.591 [2024-11-20 17:51:06.945556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 [2024-11-20 17:51:06.976815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.976956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.591 [2024-11-20 17:51:06.976977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.237 ms 00:20:43.591 [2024-11-20 17:51:06.976986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 [2024-11-20 17:51:06.977062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.977072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.591 [2024-11-20 17:51:06.977082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:43.591 [2024-11-20 17:51:06.977089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 [2024-11-20 17:51:06.977418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.977433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.591 [2024-11-20 17:51:06.977445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:20:43.591 [2024-11-20 17:51:06.977452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 [2024-11-20 17:51:06.977579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.977587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.591 [2024-11-20 17:51:06.977597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:43.591 [2024-11-20 17:51:06.977604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 [2024-11-20 17:51:06.992382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.591 [2024-11-20 17:51:06.992508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.591 [2024-11-20 17:51:06.992527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.754 ms 00:20:43.591 [2024-11-20 17:51:06.992535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.591 [2024-11-20 17:51:07.013336] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:43.591 [2024-11-20 17:51:07.013376] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:43.591 [2024-11-20 17:51:07.013391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.592 [2024-11-20 17:51:07.013400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:43.592 [2024-11-20 17:51:07.013411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.754 ms 00:20:43.592 [2024-11-20 17:51:07.013418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.592 [2024-11-20 17:51:07.037814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:43.592 [2024-11-20 17:51:07.037858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:43.592 [2024-11-20 17:51:07.037889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.327 ms 00:20:43.592 [2024-11-20 17:51:07.037903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.592 [2024-11-20 17:51:07.050066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.592 [2024-11-20 17:51:07.050105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:43.592 [2024-11-20 17:51:07.050120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.070 ms 00:20:43.592 [2024-11-20 17:51:07.050127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.592 [2024-11-20 17:51:07.062039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.592 [2024-11-20 17:51:07.062077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:43.592 [2024-11-20 17:51:07.062090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.834 ms 00:20:43.592 [2024-11-20 17:51:07.062098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.592 [2024-11-20 17:51:07.062738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.592 [2024-11-20 17:51:07.062759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:43.592 [2024-11-20 17:51:07.062771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:20:43.592 [2024-11-20 17:51:07.062779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.592 [2024-11-20 17:51:07.124493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.592 [2024-11-20 17:51:07.124730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:43.592 [2024-11-20 17:51:07.124759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.685 ms 00:20:43.592 [2024-11-20 17:51:07.124770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.135856] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:43.852 [2024-11-20 17:51:07.155384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.852 [2024-11-20 17:51:07.155609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:43.852 [2024-11-20 17:51:07.155633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.520 ms 00:20:43.852 [2024-11-20 17:51:07.155645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.155738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.852 [2024-11-20 17:51:07.155751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:43.852 [2024-11-20 17:51:07.155761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:43.852 [2024-11-20 17:51:07.155772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.155830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.852 [2024-11-20 17:51:07.155841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.852 [2024-11-20 17:51:07.155849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:20:43.852 [2024-11-20 17:51:07.155862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.155947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.852 [2024-11-20 17:51:07.155967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:43.852 [2024-11-20 17:51:07.155976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:43.852 [2024-11-20 17:51:07.155986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.156023] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:43.852 [2024-11-20 17:51:07.156038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.852 [2024-11-20 17:51:07.156046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:43.852 [2024-11-20 17:51:07.156060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:43.852 [2024-11-20 17:51:07.156067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.182647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.852 [2024-11-20 17:51:07.182840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:43.852 [2024-11-20 17:51:07.182891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.548 ms 00:20:43.852 [2024-11-20 17:51:07.182905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.183071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.852 [2024-11-20 17:51:07.183085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:43.852 [2024-11-20 17:51:07.183096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:43.852 [2024-11-20 17:51:07.183107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.852 [2024-11-20 17:51:07.184256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:43.852 [2024-11-20 17:51:07.188040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.210 ms, result 0 00:20:43.852 [2024-11-20 17:51:07.190101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:43.852 Some configs were skipped because the RPC state that can call them passed over. 
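With startup finished, trim.sh drives the freshly configured target over JSON-RPC: the two bdev_ftl_unmap calls that follow trim 1024 blocks at LBA 0 and at LBA 23591936, i.e. the first and the last 1024 blocks of the 23592960-entry address space from the layout dump. Stripped of the test harness, the sequence amounts to something like this sketch (paths relative to the SPDK repo; ftl.json is a hypothetical config saved earlier with save_config):

  build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # Poll until the RPC socket answers (the harness does this via its waitforlisten helper).
  until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done
  scripts/rpc.py load_config < ftl.json
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  kill "$svcpid"; wait "$svcpid"

Each unmap shows up in the log as its own 'FTL trim' management process with per-step durations, just like startup and shutdown.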
00:20:43.852 17:51:07 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:44.109 [2024-11-20 17:51:07.500266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.110 [2024-11-20 17:51:07.500412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:44.110 [2024-11-20 17:51:07.500477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:20:44.110 [2024-11-20 17:51:07.500503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.110 [2024-11-20 17:51:07.500554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.171 ms, result 0 00:20:44.110 true 00:20:44.110 17:51:07 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:44.369 [2024-11-20 17:51:07.698686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.369 [2024-11-20 17:51:07.698732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:44.369 [2024-11-20 17:51:07.698746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:20:44.369 [2024-11-20 17:51:07.698753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.369 [2024-11-20 17:51:07.698790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.201 ms, result 0 00:20:44.369 true 00:20:44.369 17:51:07 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77049 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77049 ']' 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77049 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77049 00:20:44.369 killing process with pid 77049 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77049' 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77049 00:20:44.369 17:51:07 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77049 00:20:44.936 [2024-11-20 17:51:08.417188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.417235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:44.936 [2024-11-20 17:51:08.417247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:44.936 [2024-11-20 17:51:08.417256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.417280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:44.936 [2024-11-20 17:51:08.419852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.419886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:44.936 [2024-11-20 17:51:08.419900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.556 ms 00:20:44.936 [2024-11-20 17:51:08.419907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.420205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.420219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:44.936 [2024-11-20 17:51:08.420230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:44.936 [2024-11-20 17:51:08.420237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.424580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.424609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:44.936 [2024-11-20 17:51:08.424623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.322 ms 00:20:44.936 [2024-11-20 17:51:08.424630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.431546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.431675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:44.936 [2024-11-20 17:51:08.431695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.881 ms 00:20:44.936 [2024-11-20 17:51:08.431702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.441691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.441720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:44.936 [2024-11-20 17:51:08.441734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.932 ms 00:20:44.936 [2024-11-20 17:51:08.441747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.449278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.449395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.936 [2024-11-20 17:51:08.449413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.494 ms 00:20:44.936 [2024-11-20 17:51:08.449420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.449557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.449568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.936 [2024-11-20 17:51:08.449577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:44.936 [2024-11-20 17:51:08.449584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.459840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.459882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:44.936 [2024-11-20 17:51:08.459894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.234 ms 00:20:44.936 [2024-11-20 17:51:08.459901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.936 [2024-11-20 17:51:08.469985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.936 [2024-11-20 17:51:08.470013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:44.936 [2024-11-20 
17:51:08.470026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.047 ms 00:20:44.936 [2024-11-20 17:51:08.470033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.197 [2024-11-20 17:51:08.479700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.197 [2024-11-20 17:51:08.479808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:45.197 [2024-11-20 17:51:08.479827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.632 ms 00:20:45.197 [2024-11-20 17:51:08.479834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.197 [2024-11-20 17:51:08.489919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.197 [2024-11-20 17:51:08.490023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:45.197 [2024-11-20 17:51:08.490039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.010 ms 00:20:45.197 [2024-11-20 17:51:08.490046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.197 [2024-11-20 17:51:08.490076] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:45.197 [2024-11-20 17:51:08.490088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490221] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:45.197 [2024-11-20 17:51:08.490272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 
17:51:08.490435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:45.198 [2024-11-20 17:51:08.490640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:45.198 [2024-11-20 17:51:08.490858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:45.199 [2024-11-20 17:51:08.490866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:45.199 [2024-11-20 17:51:08.490891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:45.199 [2024-11-20 17:51:08.490898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:45.199 [2024-11-20 17:51:08.490907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:45.199 [2024-11-20 17:51:08.490914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:45.199 [2024-11-20 17:51:08.490925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:45.199 [2024-11-20 17:51:08.490940] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:45.199 [2024-11-20 17:51:08.490952] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:20:45.199 [2024-11-20 17:51:08.490965] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:45.199 [2024-11-20 17:51:08.490975] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:45.199 [2024-11-20 17:51:08.490982] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:45.199 [2024-11-20 17:51:08.490991] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:45.199 [2024-11-20 17:51:08.490997] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:45.199 [2024-11-20 17:51:08.491006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:45.199 [2024-11-20 17:51:08.491013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:45.199 [2024-11-20 17:51:08.491021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:45.199 [2024-11-20 17:51:08.491027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:45.199 [2024-11-20 17:51:08.491036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.199 [2024-11-20 17:51:08.491043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:45.199 [2024-11-20 17:51:08.491052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:20:45.199 [2024-11-20 17:51:08.491059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.503311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.199 [2024-11-20 17:51:08.503338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:45.199 [2024-11-20 17:51:08.503352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:20:45.199 [2024-11-20 17:51:08.503360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.503719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:45.199 [2024-11-20 17:51:08.503733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:45.199 [2024-11-20 17:51:08.503743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:20:45.199 [2024-11-20 17:51:08.503752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.547352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.547381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.199 [2024-11-20 17:51:08.547393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.547400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.547486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.547495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.199 [2024-11-20 17:51:08.547504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.547514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.547553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.547562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.199 [2024-11-20 17:51:08.547572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.547579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.547597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.547604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.199 [2024-11-20 17:51:08.547612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.547619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.623337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.623376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.199 [2024-11-20 17:51:08.623389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.623396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.686668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.686714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.199 [2024-11-20 17:51:08.686727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.686738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.686817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.686826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.199 [2024-11-20 17:51:08.686839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.686847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:45.199 [2024-11-20 17:51:08.686905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.686915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.199 [2024-11-20 17:51:08.686925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.686932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.687033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.687043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.199 [2024-11-20 17:51:08.687053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.687060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.687092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.687101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:45.199 [2024-11-20 17:51:08.687110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.687117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.687159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.687167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.199 [2024-11-20 17:51:08.687178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.687185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.687232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.199 [2024-11-20 17:51:08.687242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.199 [2024-11-20 17:51:08.687251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.199 [2024-11-20 17:51:08.687259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.199 [2024-11-20 17:51:08.687397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.183 ms, result 0 00:20:45.770 17:51:09 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:46.031 [2024-11-20 17:51:09.351736] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:20:46.032 [2024-11-20 17:51:09.351855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77102 ] 00:20:46.032 [2024-11-20 17:51:09.504915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.293 [2024-11-20 17:51:09.587229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.293 [2024-11-20 17:51:09.795462] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:46.293 [2024-11-20 17:51:09.795512] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:46.555 [2024-11-20 17:51:09.943265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.943299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:46.555 [2024-11-20 17:51:09.943309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.555 [2024-11-20 17:51:09.943319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.945354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.945382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.555 [2024-11-20 17:51:09.945389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:20:46.555 [2024-11-20 17:51:09.945395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.945449] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:46.555 [2024-11-20 17:51:09.945954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:46.555 [2024-11-20 17:51:09.946013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.946020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.555 [2024-11-20 17:51:09.946027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:20:46.555 [2024-11-20 17:51:09.946032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.947159] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:46.555 [2024-11-20 17:51:09.956529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.956559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:46.555 [2024-11-20 17:51:09.956568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.372 ms 00:20:46.555 [2024-11-20 17:51:09.956574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.956634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.956644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:46.555 [2024-11-20 17:51:09.956650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:46.555 [2024-11-20 17:51:09.956656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.960935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:46.555 [2024-11-20 17:51:09.961046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.555 [2024-11-20 17:51:09.961059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.251 ms 00:20:46.555 [2024-11-20 17:51:09.961065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.961139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.961147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.555 [2024-11-20 17:51:09.961154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:46.555 [2024-11-20 17:51:09.961159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.961175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.961183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:46.555 [2024-11-20 17:51:09.961189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:46.555 [2024-11-20 17:51:09.961195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.961212] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:46.555 [2024-11-20 17:51:09.963950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.963973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.555 [2024-11-20 17:51:09.963980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:20:46.555 [2024-11-20 17:51:09.963986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.964012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.555 [2024-11-20 17:51:09.964018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:46.555 [2024-11-20 17:51:09.964024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:46.555 [2024-11-20 17:51:09.964031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.555 [2024-11-20 17:51:09.964044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:46.555 [2024-11-20 17:51:09.964059] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:46.555 [2024-11-20 17:51:09.964085] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:46.555 [2024-11-20 17:51:09.964096] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:46.555 [2024-11-20 17:51:09.964174] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:46.555 [2024-11-20 17:51:09.964182] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:46.555 [2024-11-20 17:51:09.964190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:46.555 [2024-11-20 17:51:09.964198] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:46.555 [2024-11-20 17:51:09.964206] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:46.555 [2024-11-20 17:51:09.964212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:46.555 [2024-11-20 17:51:09.964218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:46.555 [2024-11-20 17:51:09.964224] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:46.555 [2024-11-20 17:51:09.964230] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:46.555 [2024-11-20 17:51:09.964235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:09.964241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:46.556 [2024-11-20 17:51:09.964247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:46.556 [2024-11-20 17:51:09.964252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:09.964318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:09.964326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:46.556 [2024-11-20 17:51:09.964332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:46.556 [2024-11-20 17:51:09.964337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:09.964412] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:46.556 [2024-11-20 17:51:09.964419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:46.556 [2024-11-20 17:51:09.964425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:46.556 [2024-11-20 17:51:09.964441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:46.556 [2024-11-20 17:51:09.964457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.556 [2024-11-20 17:51:09.964467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:46.556 [2024-11-20 17:51:09.964472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:46.556 [2024-11-20 17:51:09.964477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.556 [2024-11-20 17:51:09.964487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:46.556 [2024-11-20 17:51:09.964492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:46.556 [2024-11-20 17:51:09.964496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:46.556 [2024-11-20 17:51:09.964508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964513] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:46.556 [2024-11-20 17:51:09.964523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:46.556 [2024-11-20 17:51:09.964538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:46.556 [2024-11-20 17:51:09.964552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:46.556 [2024-11-20 17:51:09.964567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:46.556 [2024-11-20 17:51:09.964582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.556 [2024-11-20 17:51:09.964591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:46.556 [2024-11-20 17:51:09.964596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:46.556 [2024-11-20 17:51:09.964601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.556 [2024-11-20 17:51:09.964606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:46.556 [2024-11-20 17:51:09.964611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:46.556 [2024-11-20 17:51:09.964616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:46.556 [2024-11-20 17:51:09.964626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:46.556 [2024-11-20 17:51:09.964631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964636] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:46.556 [2024-11-20 17:51:09.964642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:46.556 [2024-11-20 17:51:09.964648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.556 [2024-11-20 17:51:09.964660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:46.556 [2024-11-20 17:51:09.964665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:46.556 [2024-11-20 17:51:09.964671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:46.556 
[2024-11-20 17:51:09.964676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:46.556 [2024-11-20 17:51:09.964681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:46.556 [2024-11-20 17:51:09.964686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:46.556 [2024-11-20 17:51:09.964692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:46.556 [2024-11-20 17:51:09.964699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.556 [2024-11-20 17:51:09.964705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:46.556 [2024-11-20 17:51:09.964710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:46.556 [2024-11-20 17:51:09.964715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:46.556 [2024-11-20 17:51:09.964721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:46.556 [2024-11-20 17:51:09.964727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:46.556 [2024-11-20 17:51:09.964732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:46.556 [2024-11-20 17:51:09.964737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:46.556 [2024-11-20 17:51:09.964742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:46.556 [2024-11-20 17:51:09.964748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:46.556 [2024-11-20 17:51:09.964753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:46.556 [2024-11-20 17:51:09.964758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:46.556 [2024-11-20 17:51:09.964764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:46.556 [2024-11-20 17:51:09.964769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:46.556 [2024-11-20 17:51:09.964774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:46.556 [2024-11-20 17:51:09.964779] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:46.556 [2024-11-20 17:51:09.964785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.556 [2024-11-20 17:51:09.964791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:46.556 [2024-11-20 17:51:09.964797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:46.556 [2024-11-20 17:51:09.964802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:46.556 [2024-11-20 17:51:09.964808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:46.556 [2024-11-20 17:51:09.964813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:09.964818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:46.556 [2024-11-20 17:51:09.964826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:20:46.556 [2024-11-20 17:51:09.964831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:09.985715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:09.985743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.556 [2024-11-20 17:51:09.985751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.846 ms 00:20:46.556 [2024-11-20 17:51:09.985757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:09.985849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:09.985860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:46.556 [2024-11-20 17:51:09.985866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:46.556 [2024-11-20 17:51:09.985886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.027700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.027733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.556 [2024-11-20 17:51:10.027743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.797 ms 00:20:46.556 [2024-11-20 17:51:10.027753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.027817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.027828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.556 [2024-11-20 17:51:10.027835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:46.556 [2024-11-20 17:51:10.027842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.028137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.028154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.556 [2024-11-20 17:51:10.028162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:46.556 [2024-11-20 17:51:10.028169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.028288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.028296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.556 [2024-11-20 17:51:10.028304] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:46.556 [2024-11-20 17:51:10.028310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.039385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.039410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.556 [2024-11-20 17:51:10.039418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.057 ms 00:20:46.556 [2024-11-20 17:51:10.039425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.049521] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:46.556 [2024-11-20 17:51:10.049545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:46.556 [2024-11-20 17:51:10.049554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.049561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:46.556 [2024-11-20 17:51:10.049568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.042 ms 00:20:46.556 [2024-11-20 17:51:10.049575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.068748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.068782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:46.556 [2024-11-20 17:51:10.068792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.115 ms 00:20:46.556 [2024-11-20 17:51:10.068799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.077781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.077807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:46.556 [2024-11-20 17:51:10.077815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.938 ms 00:20:46.556 [2024-11-20 17:51:10.077821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.086687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.086791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:46.556 [2024-11-20 17:51:10.086803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.826 ms 00:20:46.556 [2024-11-20 17:51:10.086809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.556 [2024-11-20 17:51:10.087292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.556 [2024-11-20 17:51:10.087309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:46.556 [2024-11-20 17:51:10.087316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:20:46.556 [2024-11-20 17:51:10.087322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.130710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.130751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:46.818 [2024-11-20 17:51:10.130761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 43.370 ms 00:20:46.818 [2024-11-20 17:51:10.130767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.138363] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:46.818 [2024-11-20 17:51:10.149581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.149695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:46.818 [2024-11-20 17:51:10.149709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.749 ms 00:20:46.818 [2024-11-20 17:51:10.149719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.149791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.149799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:46.818 [2024-11-20 17:51:10.149806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.818 [2024-11-20 17:51:10.149812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.149847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.149854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:46.818 [2024-11-20 17:51:10.149861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:46.818 [2024-11-20 17:51:10.149867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.149908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.149915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:46.818 [2024-11-20 17:51:10.149921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.818 [2024-11-20 17:51:10.149927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.149950] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:46.818 [2024-11-20 17:51:10.149957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.149963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:46.818 [2024-11-20 17:51:10.149968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:46.818 [2024-11-20 17:51:10.149974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.167654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.167681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:46.818 [2024-11-20 17:51:10.167689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.665 ms 00:20:46.818 [2024-11-20 17:51:10.167695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.818 [2024-11-20 17:51:10.167765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.818 [2024-11-20 17:51:10.167773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:46.818 [2024-11-20 17:51:10.167780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:46.818 [2024-11-20 17:51:10.167785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:46.818 [2024-11-20 17:51:10.168422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.818 [2024-11-20 17:51:10.170802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.942 ms, result 0 00:20:46.818 [2024-11-20 17:51:10.171417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.818 [2024-11-20 17:51:10.186110] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:47.760  [2024-11-20T17:51:12.241Z] Copying: 25/256 [MB] (25 MBps) [2024-11-20T17:51:13.628Z] Copying: 38/256 [MB] (13 MBps) [2024-11-20T17:51:14.573Z] Copying: 51/256 [MB] (12 MBps) [2024-11-20T17:51:15.509Z] Copying: 63/256 [MB] (12 MBps) [2024-11-20T17:51:16.441Z] Copying: 80/256 [MB] (16 MBps) [2024-11-20T17:51:17.379Z] Copying: 99/256 [MB] (19 MBps) [2024-11-20T17:51:18.313Z] Copying: 114/256 [MB] (15 MBps) [2024-11-20T17:51:19.248Z] Copying: 127/256 [MB] (12 MBps) [2024-11-20T17:51:20.625Z] Copying: 146/256 [MB] (19 MBps) [2024-11-20T17:51:21.562Z] Copying: 167/256 [MB] (20 MBps) [2024-11-20T17:51:22.496Z] Copying: 186/256 [MB] (19 MBps) [2024-11-20T17:51:23.430Z] Copying: 200/256 [MB] (13 MBps) [2024-11-20T17:51:24.370Z] Copying: 223/256 [MB] (22 MBps) [2024-11-20T17:51:25.314Z] Copying: 236/256 [MB] (12 MBps) [2024-11-20T17:51:26.257Z] Copying: 247/256 [MB] (11 MBps) [2024-11-20T17:51:26.520Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-20 17:51:26.279673] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:02.980 [2024-11-20 17:51:26.291135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.980 [2024-11-20 17:51:26.291192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:02.980 [2024-11-20 17:51:26.291208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:02.980 [2024-11-20 17:51:26.291228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.980 [2024-11-20 17:51:26.291257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:02.980 [2024-11-20 17:51:26.294373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.980 [2024-11-20 17:51:26.294430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:02.980 [2024-11-20 17:51:26.294443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.101 ms 00:21:02.980 [2024-11-20 17:51:26.294453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.980 [2024-11-20 17:51:26.294755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.980 [2024-11-20 17:51:26.294776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:02.980 [2024-11-20 17:51:26.294787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:21:02.980 [2024-11-20 17:51:26.294796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.980 [2024-11-20 17:51:26.298498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.298529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:02.981 [2024-11-20 17:51:26.298539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
3.686 ms 00:21:02.981 [2024-11-20 17:51:26.298547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.305505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.305551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:02.981 [2024-11-20 17:51:26.305563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:21:02.981 [2024-11-20 17:51:26.305572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.331752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.331804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:02.981 [2024-11-20 17:51:26.331818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.102 ms 00:21:02.981 [2024-11-20 17:51:26.331826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.348258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.348331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:02.981 [2024-11-20 17:51:26.348350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.355 ms 00:21:02.981 [2024-11-20 17:51:26.348358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.348501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.348512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:02.981 [2024-11-20 17:51:26.348522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:02.981 [2024-11-20 17:51:26.348531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.375307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.375358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:02.981 [2024-11-20 17:51:26.375371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.749 ms 00:21:02.981 [2024-11-20 17:51:26.375378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.401592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.401638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:02.981 [2024-11-20 17:51:26.401651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.144 ms 00:21:02.981 [2024-11-20 17:51:26.401659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.427327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.427376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:02.981 [2024-11-20 17:51:26.427390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.600 ms 00:21:02.981 [2024-11-20 17:51:26.427397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.452439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.981 [2024-11-20 17:51:26.452502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:02.981 [2024-11-20 
17:51:26.452515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.940 ms 00:21:02.981 [2024-11-20 17:51:26.452523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.981 [2024-11-20 17:51:26.452576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:02.981 [2024-11-20 17:51:26.452592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:02.981 [2024-11-20 17:51:26.452991] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.452999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453183] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 
17:51:26.453393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:02.982 [2024-11-20 17:51:26.453426] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:02.982 [2024-11-20 17:51:26.453435] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e06a0caf-34b4-47ef-af8c-ec0a4fae16c0 00:21:02.982 [2024-11-20 17:51:26.453443] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:02.982 [2024-11-20 17:51:26.453452] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:02.982 [2024-11-20 17:51:26.453459] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:02.982 [2024-11-20 17:51:26.453468] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:02.982 [2024-11-20 17:51:26.453476] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:02.982 [2024-11-20 17:51:26.453485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:02.982 [2024-11-20 17:51:26.453493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:02.982 [2024-11-20 17:51:26.453500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:02.982 [2024-11-20 17:51:26.453506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:02.982 [2024-11-20 17:51:26.453514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.982 [2024-11-20 17:51:26.453526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:02.982 [2024-11-20 17:51:26.453536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:21:02.982 [2024-11-20 17:51:26.453544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.982 [2024-11-20 17:51:26.467438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.982 [2024-11-20 17:51:26.467489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:02.982 [2024-11-20 17:51:26.467501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.856 ms 00:21:02.982 [2024-11-20 17:51:26.467509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.983 [2024-11-20 17:51:26.467942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.983 [2024-11-20 17:51:26.467964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:02.983 [2024-11-20 17:51:26.467976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:21:02.983 [2024-11-20 17:51:26.467983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.983 [2024-11-20 17:51:26.507237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.983 [2024-11-20 17:51:26.507291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.983 [2024-11-20 17:51:26.507304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.983 [2024-11-20 17:51:26.507313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.983 [2024-11-20 17:51:26.507423] mngt/ftl_mngt.c: 427:trace_step: 
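Every FTL management step above is traced with the same four notices (Action/Rollback, name, duration, status), which makes the log easy to mine. A minimal sketch for pulling the step names and totalling their durations out of a captured log, assuming the exact wording shown above; ftl.log is a hypothetical capture path:

  # List the traced step names, then total the reported durations.
  grep -oE 'name: [A-Za-z0-9 -]+' ftl.log | sed -E 's/^name: //; s/ [0-9]+$//'
  grep -oE 'duration: [0-9]+\.[0-9]+ ms' ftl.log |
    awk '{ sum += $2 } END { printf "steps: %d  total: %.3f ms\n", NR, sum }'
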
*NOTICE*: [FTL][ftl0] Rollback 00:21:02.983 [2024-11-20 17:51:26.507433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.983 [2024-11-20 17:51:26.507442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.983 [2024-11-20 17:51:26.507450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.983 [2024-11-20 17:51:26.507504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.983 [2024-11-20 17:51:26.507515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.983 [2024-11-20 17:51:26.507523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.983 [2024-11-20 17:51:26.507530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.983 [2024-11-20 17:51:26.507549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.983 [2024-11-20 17:51:26.507560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.983 [2024-11-20 17:51:26.507568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.983 [2024-11-20 17:51:26.507575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.243 [2024-11-20 17:51:26.592767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.243 [2024-11-20 17:51:26.592828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.243 [2024-11-20 17:51:26.592842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.243 [2024-11-20 17:51:26.592851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.243 [2024-11-20 17:51:26.662258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.243 [2024-11-20 17:51:26.662317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.243 [2024-11-20 17:51:26.662330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.243 [2024-11-20 17:51:26.662340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.243 [2024-11-20 17:51:26.662457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.243 [2024-11-20 17:51:26.662469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.244 [2024-11-20 17:51:26.662479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.244 [2024-11-20 17:51:26.662488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.244 [2024-11-20 17:51:26.662521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.244 [2024-11-20 17:51:26.662531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.244 [2024-11-20 17:51:26.662547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.244 [2024-11-20 17:51:26.662556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.244 [2024-11-20 17:51:26.662660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.244 [2024-11-20 17:51:26.662671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.244 [2024-11-20 17:51:26.662680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.244 [2024-11-20 17:51:26.662688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:21:03.244 [2024-11-20 17:51:26.662723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.244 [2024-11-20 17:51:26.662735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:03.244 [2024-11-20 17:51:26.662744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.244 [2024-11-20 17:51:26.662757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.244 [2024-11-20 17:51:26.662801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.244 [2024-11-20 17:51:26.662811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.244 [2024-11-20 17:51:26.662820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.244 [2024-11-20 17:51:26.662828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.244 [2024-11-20 17:51:26.662894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.244 [2024-11-20 17:51:26.662924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.244 [2024-11-20 17:51:26.662937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.244 [2024-11-20 17:51:26.662946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.244 [2024-11-20 17:51:26.663107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.973 ms, result 0 00:21:04.179 00:21:04.179 00:21:04.179 17:51:27 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:04.440 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:04.440 17:51:27 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:04.440 17:51:27 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:04.440 17:51:27 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:04.440 17:51:27 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:04.440 17:51:27 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:04.700 17:51:27 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:04.700 17:51:28 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77049 00:21:04.700 17:51:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77049 ']' 00:21:04.700 17:51:28 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77049 00:21:04.700 Process with pid 77049 is not found 00:21:04.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77049) - No such process 00:21:04.700 17:51:28 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77049 is not found' 00:21:04.700 00:21:04.700 real 1m10.643s 00:21:04.700 user 1m37.505s 00:21:04.700 sys 0m5.222s 00:21:04.700 17:51:28 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.700 ************************************ 00:21:04.700 END TEST ftl_trim 00:21:04.700 ************************************ 00:21:04.700 17:51:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:04.700 17:51:28 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:04.700 17:51:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:04.700 17:51:28 ftl -- 
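The trim test passes because md5sum -c reports the read-back file .../test/ftl/data as OK against the checksum recorded before shutdown. The same record-then-verify round trip in miniature, with hypothetical file names:

  dd if=/dev/urandom of=data bs=1M count=4 status=none   # test pattern
  md5sum data > testfile.md5                             # checksum before the run
  # ... exercise the device, then read the contents back into "data" ...
  md5sum -c testfile.md5 && echo 'data survived the round trip'
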
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.700 17:51:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:04.700 ************************************ 00:21:04.700 START TEST ftl_restore 00:21:04.700 ************************************ 00:21:04.700 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:04.700 * Looking for test storage... 00:21:04.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:04.700 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:04.700 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:04.700 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:21:04.700 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.700 17:51:28 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.959 17:51:28 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:04.959 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.959 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:04.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.959 --rc genhtml_branch_coverage=1 00:21:04.959 --rc genhtml_function_coverage=1 00:21:04.959 --rc genhtml_legend=1 00:21:04.959 --rc geninfo_all_blocks=1 00:21:04.959 --rc geninfo_unexecuted_blocks=1 00:21:04.959 00:21:04.959 ' 00:21:04.959 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:04.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.959 --rc genhtml_branch_coverage=1 00:21:04.959 --rc genhtml_function_coverage=1 00:21:04.959 --rc genhtml_legend=1 00:21:04.959 --rc geninfo_all_blocks=1 00:21:04.959 --rc geninfo_unexecuted_blocks=1 00:21:04.959 00:21:04.959 ' 00:21:04.959 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:04.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.959 --rc genhtml_branch_coverage=1 00:21:04.959 --rc genhtml_function_coverage=1 00:21:04.959 --rc genhtml_legend=1 00:21:04.959 --rc geninfo_all_blocks=1 00:21:04.959 --rc geninfo_unexecuted_blocks=1 00:21:04.959 00:21:04.959 ' 00:21:04.959 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:04.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.959 --rc genhtml_branch_coverage=1 00:21:04.959 --rc genhtml_function_coverage=1 00:21:04.959 --rc genhtml_legend=1 00:21:04.959 --rc geninfo_all_blocks=1 00:21:04.959 --rc geninfo_unexecuted_blocks=1 00:21:04.959 00:21:04.959 ' 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
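The cmp_versions walk above (split both version strings on IFS=.-:, then compare field by field) is how common.sh decides that lcov 1.15 is older than 2 and picks the matching LCOV_OPTS. A compact sketch of the same idea, assuming purely numeric fields:

  version_lt() {                        # succeeds when $1 < $2
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                            # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'old lcov: enable the branch-coverage options'
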
00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:04.959 17:51:28 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Xlk8okiNyB 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:04.960 
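restore.sh's prologue is all argument plumbing: getopts :u:c:f consumes -c 0000:00:10.0 into nv_cache, a shift exposes the positional device 0000:00:11.0, and a trap wires restore_kill in for cleanup. The same pattern in a self-contained sketch (the option letters mirror restore.sh; everything else is illustrative):

  nv_cache='' uuid='' fast=0
  while getopts :u:c:f opt; do          # leading ':' = silent error handling
    case $opt in
      u) uuid=$OPTARG ;;
      c) nv_cache=$OPTARG ;;            # e.g. -c 0000:00:10.0
      f) fast=1 ;;
      *) echo "usage: $0 [-u uuid] [-c bdf] [-f] device" >&2; exit 1 ;;
    esac
  done
  shift $((OPTIND - 1))                 # restore.sh hardcodes "shift 2" here
  device=$1
  trap 'echo cleaning up; exit 1' SIGINT SIGTERM EXIT
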
17:51:28 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77360 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77360 00:21:04.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.960 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77360 ']' 00:21:04.960 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.960 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.960 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.960 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.960 17:51:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:04.960 17:51:28 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.960 [2024-11-20 17:51:28.350646] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:21:04.960 [2024-11-20 17:51:28.350765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77360 ] 00:21:05.218 [2024-11-20 17:51:28.510651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.218 [2024-11-20 17:51:28.605528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.784 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.784 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:05.784 17:51:29 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:05.785 17:51:29 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:05.785 17:51:29 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:05.785 17:51:29 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:05.785 17:51:29 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:05.785 17:51:29 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:06.042 17:51:29 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:06.042 17:51:29 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:06.042 17:51:29 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:06.042 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:06.042 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:06.042 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:06.042 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:06.042 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:06.300 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:06.300 { 00:21:06.300 "name": "nvme0n1", 00:21:06.300 "aliases": [ 00:21:06.300 "6bcbe587-7cf3-402b-a5e0-952d0d021eec" 00:21:06.300 ], 00:21:06.300 "product_name": "NVMe disk", 00:21:06.300 "block_size": 4096, 00:21:06.300 "num_blocks": 1310720, 00:21:06.300 "uuid": 
"6bcbe587-7cf3-402b-a5e0-952d0d021eec", 00:21:06.300 "numa_id": -1, 00:21:06.300 "assigned_rate_limits": { 00:21:06.300 "rw_ios_per_sec": 0, 00:21:06.300 "rw_mbytes_per_sec": 0, 00:21:06.300 "r_mbytes_per_sec": 0, 00:21:06.300 "w_mbytes_per_sec": 0 00:21:06.300 }, 00:21:06.300 "claimed": true, 00:21:06.300 "claim_type": "read_many_write_one", 00:21:06.300 "zoned": false, 00:21:06.300 "supported_io_types": { 00:21:06.300 "read": true, 00:21:06.300 "write": true, 00:21:06.300 "unmap": true, 00:21:06.300 "flush": true, 00:21:06.300 "reset": true, 00:21:06.300 "nvme_admin": true, 00:21:06.300 "nvme_io": true, 00:21:06.300 "nvme_io_md": false, 00:21:06.300 "write_zeroes": true, 00:21:06.300 "zcopy": false, 00:21:06.300 "get_zone_info": false, 00:21:06.300 "zone_management": false, 00:21:06.300 "zone_append": false, 00:21:06.300 "compare": true, 00:21:06.300 "compare_and_write": false, 00:21:06.300 "abort": true, 00:21:06.300 "seek_hole": false, 00:21:06.300 "seek_data": false, 00:21:06.300 "copy": true, 00:21:06.300 "nvme_iov_md": false 00:21:06.300 }, 00:21:06.300 "driver_specific": { 00:21:06.300 "nvme": [ 00:21:06.300 { 00:21:06.300 "pci_address": "0000:00:11.0", 00:21:06.300 "trid": { 00:21:06.300 "trtype": "PCIe", 00:21:06.301 "traddr": "0000:00:11.0" 00:21:06.301 }, 00:21:06.301 "ctrlr_data": { 00:21:06.301 "cntlid": 0, 00:21:06.301 "vendor_id": "0x1b36", 00:21:06.301 "model_number": "QEMU NVMe Ctrl", 00:21:06.301 "serial_number": "12341", 00:21:06.301 "firmware_revision": "8.0.0", 00:21:06.301 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:06.301 "oacs": { 00:21:06.301 "security": 0, 00:21:06.301 "format": 1, 00:21:06.301 "firmware": 0, 00:21:06.301 "ns_manage": 1 00:21:06.301 }, 00:21:06.301 "multi_ctrlr": false, 00:21:06.301 "ana_reporting": false 00:21:06.301 }, 00:21:06.301 "vs": { 00:21:06.301 "nvme_version": "1.4" 00:21:06.301 }, 00:21:06.301 "ns_data": { 00:21:06.301 "id": 1, 00:21:06.301 "can_share": false 00:21:06.301 } 00:21:06.301 } 00:21:06.301 ], 00:21:06.301 "mp_policy": "active_passive" 00:21:06.301 } 00:21:06.301 } 00:21:06.301 ]' 00:21:06.301 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:06.301 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:06.301 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:06.301 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:06.301 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:06.301 17:51:29 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:06.301 17:51:29 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:06.301 17:51:29 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:06.301 17:51:29 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:06.301 17:51:29 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:06.301 17:51:29 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:06.558 17:51:29 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5629153b-9207-409d-b3eb-e66884e6e2fe 00:21:06.558 17:51:29 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:06.558 17:51:29 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5629153b-9207-409d-b3eb-e66884e6e2fe 00:21:06.817 17:51:30 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b5620548-bf19-40fc-a0ae-a2bfbab7082f 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b5620548-bf19-40fc-a0ae-a2bfbab7082f 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:07.075 17:51:30 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.075 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.075 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:07.075 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:07.075 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:07.075 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.334 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:07.334 { 00:21:07.334 "name": "7680bf2d-7a38-40cb-94d6-8fb3c1a283cc", 00:21:07.334 "aliases": [ 00:21:07.334 "lvs/nvme0n1p0" 00:21:07.334 ], 00:21:07.334 "product_name": "Logical Volume", 00:21:07.334 "block_size": 4096, 00:21:07.334 "num_blocks": 26476544, 00:21:07.334 "uuid": "7680bf2d-7a38-40cb-94d6-8fb3c1a283cc", 00:21:07.334 "assigned_rate_limits": { 00:21:07.334 "rw_ios_per_sec": 0, 00:21:07.334 "rw_mbytes_per_sec": 0, 00:21:07.334 "r_mbytes_per_sec": 0, 00:21:07.334 "w_mbytes_per_sec": 0 00:21:07.334 }, 00:21:07.334 "claimed": false, 00:21:07.334 "zoned": false, 00:21:07.334 "supported_io_types": { 00:21:07.334 "read": true, 00:21:07.334 "write": true, 00:21:07.334 "unmap": true, 00:21:07.334 "flush": false, 00:21:07.334 "reset": true, 00:21:07.334 "nvme_admin": false, 00:21:07.334 "nvme_io": false, 00:21:07.334 "nvme_io_md": false, 00:21:07.334 "write_zeroes": true, 00:21:07.334 "zcopy": false, 00:21:07.334 "get_zone_info": false, 00:21:07.334 "zone_management": false, 00:21:07.334 "zone_append": false, 00:21:07.334 "compare": false, 00:21:07.334 "compare_and_write": false, 00:21:07.334 "abort": false, 00:21:07.334 "seek_hole": true, 00:21:07.334 "seek_data": true, 00:21:07.334 "copy": false, 00:21:07.334 "nvme_iov_md": false 00:21:07.334 }, 00:21:07.334 "driver_specific": { 00:21:07.334 "lvol": { 00:21:07.334 "lvol_store_uuid": "b5620548-bf19-40fc-a0ae-a2bfbab7082f", 00:21:07.334 "base_bdev": "nvme0n1", 00:21:07.334 "thin_provision": true, 00:21:07.334 "num_allocated_clusters": 0, 00:21:07.334 "snapshot": false, 00:21:07.334 "clone": false, 00:21:07.334 "esnap_clone": false 00:21:07.334 } 00:21:07.334 } 00:21:07.334 } 00:21:07.334 ]' 00:21:07.334 17:51:30 ftl.ftl_restore -- 
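The sequence above builds the FTL base device: attach the NVMe controller, create an lvstore on nvme0n1, then carve a 103424 MiB lvol with -t so it is thin-provisioned; that is why a ~101 GiB volume fits on a 5 GiB disk and the JSON above shows "thin_provision": true with num_allocated_clusters: 0. The same chain, condensed (UUIDs will differ per run):

  rpc=scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
  split_bdev=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
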
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:07.334 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:07.334 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:07.335 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:07.335 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:07.335 17:51:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:07.335 17:51:30 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:07.335 17:51:30 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:07.335 17:51:30 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:07.593 17:51:31 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:07.593 17:51:31 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:07.593 17:51:31 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.593 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.593 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:07.593 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:07.593 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:07.593 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:07.852 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:07.852 { 00:21:07.852 "name": "7680bf2d-7a38-40cb-94d6-8fb3c1a283cc", 00:21:07.852 "aliases": [ 00:21:07.852 "lvs/nvme0n1p0" 00:21:07.852 ], 00:21:07.852 "product_name": "Logical Volume", 00:21:07.852 "block_size": 4096, 00:21:07.852 "num_blocks": 26476544, 00:21:07.852 "uuid": "7680bf2d-7a38-40cb-94d6-8fb3c1a283cc", 00:21:07.852 "assigned_rate_limits": { 00:21:07.852 "rw_ios_per_sec": 0, 00:21:07.852 "rw_mbytes_per_sec": 0, 00:21:07.852 "r_mbytes_per_sec": 0, 00:21:07.852 "w_mbytes_per_sec": 0 00:21:07.852 }, 00:21:07.852 "claimed": false, 00:21:07.852 "zoned": false, 00:21:07.852 "supported_io_types": { 00:21:07.852 "read": true, 00:21:07.852 "write": true, 00:21:07.852 "unmap": true, 00:21:07.852 "flush": false, 00:21:07.852 "reset": true, 00:21:07.852 "nvme_admin": false, 00:21:07.852 "nvme_io": false, 00:21:07.852 "nvme_io_md": false, 00:21:07.852 "write_zeroes": true, 00:21:07.852 "zcopy": false, 00:21:07.852 "get_zone_info": false, 00:21:07.852 "zone_management": false, 00:21:07.852 "zone_append": false, 00:21:07.852 "compare": false, 00:21:07.852 "compare_and_write": false, 00:21:07.852 "abort": false, 00:21:07.852 "seek_hole": true, 00:21:07.852 "seek_data": true, 00:21:07.852 "copy": false, 00:21:07.852 "nvme_iov_md": false 00:21:07.852 }, 00:21:07.852 "driver_specific": { 00:21:07.852 "lvol": { 00:21:07.852 "lvol_store_uuid": "b5620548-bf19-40fc-a0ae-a2bfbab7082f", 00:21:07.852 "base_bdev": "nvme0n1", 00:21:07.852 "thin_provision": true, 00:21:07.852 "num_allocated_clusters": 0, 00:21:07.852 "snapshot": false, 00:21:07.852 "clone": false, 00:21:07.852 "esnap_clone": false 00:21:07.852 } 00:21:07.852 } 00:21:07.852 } 00:21:07.852 ]' 00:21:07.852 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:21:07.852 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:07.852 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:07.852 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:07.852 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:07.852 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:07.852 17:51:31 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:07.852 17:51:31 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:08.113 17:51:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:08.113 17:51:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:08.113 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:08.113 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:08.113 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:08.113 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:08.113 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc 00:21:08.374 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:08.374 { 00:21:08.374 "name": "7680bf2d-7a38-40cb-94d6-8fb3c1a283cc", 00:21:08.374 "aliases": [ 00:21:08.374 "lvs/nvme0n1p0" 00:21:08.374 ], 00:21:08.374 "product_name": "Logical Volume", 00:21:08.374 "block_size": 4096, 00:21:08.374 "num_blocks": 26476544, 00:21:08.374 "uuid": "7680bf2d-7a38-40cb-94d6-8fb3c1a283cc", 00:21:08.374 "assigned_rate_limits": { 00:21:08.374 "rw_ios_per_sec": 0, 00:21:08.374 "rw_mbytes_per_sec": 0, 00:21:08.374 "r_mbytes_per_sec": 0, 00:21:08.374 "w_mbytes_per_sec": 0 00:21:08.374 }, 00:21:08.374 "claimed": false, 00:21:08.374 "zoned": false, 00:21:08.374 "supported_io_types": { 00:21:08.374 "read": true, 00:21:08.374 "write": true, 00:21:08.374 "unmap": true, 00:21:08.374 "flush": false, 00:21:08.374 "reset": true, 00:21:08.374 "nvme_admin": false, 00:21:08.374 "nvme_io": false, 00:21:08.374 "nvme_io_md": false, 00:21:08.374 "write_zeroes": true, 00:21:08.374 "zcopy": false, 00:21:08.374 "get_zone_info": false, 00:21:08.374 "zone_management": false, 00:21:08.374 "zone_append": false, 00:21:08.374 "compare": false, 00:21:08.374 "compare_and_write": false, 00:21:08.374 "abort": false, 00:21:08.374 "seek_hole": true, 00:21:08.374 "seek_data": true, 00:21:08.374 "copy": false, 00:21:08.374 "nvme_iov_md": false 00:21:08.374 }, 00:21:08.374 "driver_specific": { 00:21:08.374 "lvol": { 00:21:08.374 "lvol_store_uuid": "b5620548-bf19-40fc-a0ae-a2bfbab7082f", 00:21:08.374 "base_bdev": "nvme0n1", 00:21:08.374 "thin_provision": true, 00:21:08.374 "num_allocated_clusters": 0, 00:21:08.374 "snapshot": false, 00:21:08.374 "clone": false, 00:21:08.375 "esnap_clone": false 00:21:08.375 } 00:21:08.375 } 00:21:08.375 } 00:21:08.375 ]' 00:21:08.375 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:08.375 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:08.375 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:08.375 17:51:31 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:08.375 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:08.375 17:51:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:08.375 17:51:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:08.375 17:51:31 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc --l2p_dram_limit 10' 00:21:08.375 17:51:31 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:08.375 17:51:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:08.375 17:51:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:08.375 17:51:31 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:08.375 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:08.375 17:51:31 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7680bf2d-7a38-40cb-94d6-8fb3c1a283cc --l2p_dram_limit 10 -c nvc0n1p0 00:21:08.636 [2024-11-20 17:51:32.034492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.636 [2024-11-20 17:51:32.034531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:08.636 [2024-11-20 17:51:32.034544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:08.636 [2024-11-20 17:51:32.034550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.636 [2024-11-20 17:51:32.034599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.636 [2024-11-20 17:51:32.034607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:08.636 [2024-11-20 17:51:32.034614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:08.636 [2024-11-20 17:51:32.034621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.636 [2024-11-20 17:51:32.034636] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:08.636 [2024-11-20 17:51:32.035266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:08.636 [2024-11-20 17:51:32.035292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.636 [2024-11-20 17:51:32.035299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:08.636 [2024-11-20 17:51:32.035308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:21:08.636 [2024-11-20 17:51:32.035313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.636 [2024-11-20 17:51:32.035365] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 582e7997-38e0-40d2-a69e-d470f323bfc0 00:21:08.636 [2024-11-20 17:51:32.036290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.636 [2024-11-20 17:51:32.036319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:08.637 [2024-11-20 17:51:32.036327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:08.637 [2024-11-20 17:51:32.036334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.040991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.637 [2024-11-20 
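One genuine wart shows up above: restore.sh line 54 evaluates '[' '' -eq 1 ']' and bash complains "[: : integer expression expected", because -eq needs integers on both sides and the left operand is an empty string. The test merely returns false, so the run continues, but the guard is easy to write safely; fast_flag below is a hypothetical stand-in for whichever variable was empty:

  fast_flag=''                           # unset/empty, as in the failing case
  if [ "${fast_flag:-0}" -eq 1 ]; then   # default to 0 so -eq always sees a number
    echo 'fast path'
  fi
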
17:51:32.041022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:08.637 [2024-11-20 17:51:32.041030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:21:08.637 [2024-11-20 17:51:32.041037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.041103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.637 [2024-11-20 17:51:32.041111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:08.637 [2024-11-20 17:51:32.041118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:08.637 [2024-11-20 17:51:32.041127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.041165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.637 [2024-11-20 17:51:32.041174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:08.637 [2024-11-20 17:51:32.041180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:08.637 [2024-11-20 17:51:32.041189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.041205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:08.637 [2024-11-20 17:51:32.044050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.637 [2024-11-20 17:51:32.044076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:08.637 [2024-11-20 17:51:32.044086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:21:08.637 [2024-11-20 17:51:32.044092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.044116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.637 [2024-11-20 17:51:32.044123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:08.637 [2024-11-20 17:51:32.044130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:08.637 [2024-11-20 17:51:32.044136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.044155] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:08.637 [2024-11-20 17:51:32.044259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:08.637 [2024-11-20 17:51:32.044271] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:08.637 [2024-11-20 17:51:32.044280] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:08.637 [2024-11-20 17:51:32.044288] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044296] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044303] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:08.637 [2024-11-20 17:51:32.044309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:08.637 [2024-11-20 17:51:32.044317] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:08.637 [2024-11-20 17:51:32.044323] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:08.637 [2024-11-20 17:51:32.044330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.637 [2024-11-20 17:51:32.044335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:08.637 [2024-11-20 17:51:32.044342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:21:08.637 [2024-11-20 17:51:32.044353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.044418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.637 [2024-11-20 17:51:32.044424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:08.637 [2024-11-20 17:51:32.044431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:08.637 [2024-11-20 17:51:32.044436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.637 [2024-11-20 17:51:32.044514] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:08.637 [2024-11-20 17:51:32.044521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:08.637 [2024-11-20 17:51:32.044529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:08.637 [2024-11-20 17:51:32.044547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:08.637 [2024-11-20 17:51:32.044565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.637 [2024-11-20 17:51:32.044577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:08.637 [2024-11-20 17:51:32.044582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:08.637 [2024-11-20 17:51:32.044588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.637 [2024-11-20 17:51:32.044593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:08.637 [2024-11-20 17:51:32.044600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:08.637 [2024-11-20 17:51:32.044605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:08.637 [2024-11-20 17:51:32.044617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:08.637 [2024-11-20 17:51:32.044636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:08.637 
[2024-11-20 17:51:32.044652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:08.637 [2024-11-20 17:51:32.044670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:08.637 [2024-11-20 17:51:32.044686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:08.637 [2024-11-20 17:51:32.044705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.637 [2024-11-20 17:51:32.044717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:08.637 [2024-11-20 17:51:32.044721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:08.637 [2024-11-20 17:51:32.044728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.637 [2024-11-20 17:51:32.044732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:08.637 [2024-11-20 17:51:32.044739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:08.637 [2024-11-20 17:51:32.044744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:08.637 [2024-11-20 17:51:32.044755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:08.637 [2024-11-20 17:51:32.044761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:08.637 [2024-11-20 17:51:32.044772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:08.637 [2024-11-20 17:51:32.044778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:08.637 [2024-11-20 17:51:32.044785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.637 [2024-11-20 17:51:32.044791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:08.637 [2024-11-20 17:51:32.044799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:08.637 [2024-11-20 17:51:32.044804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:08.637 [2024-11-20 17:51:32.044810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:08.637 [2024-11-20 17:51:32.044815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:08.638 [2024-11-20 17:51:32.044822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:08.638 [2024-11-20 17:51:32.044829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:08.638 [2024-11-20 
17:51:32.044837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.638 [2024-11-20 17:51:32.044846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:08.638 [2024-11-20 17:51:32.044854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:08.638 [2024-11-20 17:51:32.044859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:08.638 [2024-11-20 17:51:32.044866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:08.638 [2024-11-20 17:51:32.044882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:08.638 [2024-11-20 17:51:32.044889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:08.638 [2024-11-20 17:51:32.044894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:08.638 [2024-11-20 17:51:32.044901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:08.638 [2024-11-20 17:51:32.044906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:08.638 [2024-11-20 17:51:32.044915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:08.638 [2024-11-20 17:51:32.044920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:08.638 [2024-11-20 17:51:32.044927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:08.638 [2024-11-20 17:51:32.044932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:08.638 [2024-11-20 17:51:32.044939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:08.638 [2024-11-20 17:51:32.044945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:08.638 [2024-11-20 17:51:32.044952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.638 [2024-11-20 17:51:32.044958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:08.638 [2024-11-20 17:51:32.044965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:08.638 [2024-11-20 17:51:32.044970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:08.638 [2024-11-20 17:51:32.044977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:08.638 [2024-11-20 17:51:32.044983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.638 [2024-11-20 17:51:32.044990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:08.638 [2024-11-20 17:51:32.044996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:21:08.638 [2024-11-20 17:51:32.045002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.638 [2024-11-20 17:51:32.045031] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:08.638 [2024-11-20 17:51:32.045041] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:12.838 [2024-11-20 17:51:35.614202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.614252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:12.838 [2024-11-20 17:51:35.614267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3569.160 ms 00:21:12.838 [2024-11-20 17:51:35.614277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.639665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.639707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.838 [2024-11-20 17:51:35.639719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.173 ms 00:21:12.838 [2024-11-20 17:51:35.639729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.639849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.639861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.838 [2024-11-20 17:51:35.639881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:12.838 [2024-11-20 17:51:35.639895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.669908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.669943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.838 [2024-11-20 17:51:35.669953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.980 ms 00:21:12.838 [2024-11-20 17:51:35.669963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.669989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.670002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.838 [2024-11-20 17:51:35.670010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:12.838 [2024-11-20 17:51:35.670019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.670371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.670402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.838 [2024-11-20 17:51:35.670411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:21:12.838 [2024-11-20 17:51:35.670420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 
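For reference, the superblock metadata layout dumped a few steps above is easy to sanity-check: every region's blk_offs plus blk_sz lands exactly on the next region's blk_offs, with the type 0xfffffffe entries marking the remaining free space. A minimal sketch over the nvc values from this run (the offs/size arrays are copied from the dump; the check itself is not part of the test):

# Verify the nvc SB metadata regions dumped above are back-to-back.
# offs/size are the blk_offs/blk_sz values from this run; the final
# entry is the 0xfffffffe free-space region.
offs=(0x0 0x20 0x5020 0x50a0 0x5120 0x5920 0x6120 0x6920 0x7120 0x7160 0x71a0 0x71c0 0x71e0 0x7200 0x7220)
size=(0x20 0x5000 0x80 0x80 0x800 0x800 0x800 0x800 0x40 0x40 0x20 0x20 0x20 0x20 0x13c0e0)
for ((i = 0; i + 1 < ${#offs[@]}; i++)); do
    (( offs[i] + size[i] == offs[i + 1] )) || echo "gap after region $i"
done

The loop prints nothing for this run, since e.g. 0x20 + 0x5000 = 0x5020 and 0x7200 + 0x20 = 0x7220.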
[2024-11-20 17:51:35.670521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.670531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.838 [2024-11-20 17:51:35.670541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:12.838 [2024-11-20 17:51:35.670553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.684281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.684316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.838 [2024-11-20 17:51:35.684326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.711 ms 00:21:12.838 [2024-11-20 17:51:35.684335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.709318] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:12.838 [2024-11-20 17:51:35.712597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.712628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.838 [2024-11-20 17:51:35.712641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.194 ms 00:21:12.838 [2024-11-20 17:51:35.712648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.793624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.793667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:12.838 [2024-11-20 17:51:35.793681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.939 ms 00:21:12.838 [2024-11-20 17:51:35.793689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.793866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.793890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.838 [2024-11-20 17:51:35.793902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:21:12.838 [2024-11-20 17:51:35.793910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.817131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.817166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:12.838 [2024-11-20 17:51:35.817177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.176 ms 00:21:12.838 [2024-11-20 17:51:35.817185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.839607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.839639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:12.838 [2024-11-20 17:51:35.839651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.384 ms 00:21:12.838 [2024-11-20 17:51:35.839658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.840230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.840250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.838 
[2024-11-20 17:51:35.840260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:12.838 [2024-11-20 17:51:35.840269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.912432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.912468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:12.838 [2024-11-20 17:51:35.912482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.130 ms 00:21:12.838 [2024-11-20 17:51:35.912490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.936991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.937026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:12.838 [2024-11-20 17:51:35.937039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.434 ms 00:21:12.838 [2024-11-20 17:51:35.937046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.960302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.960334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:12.838 [2024-11-20 17:51:35.960346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.219 ms 00:21:12.838 [2024-11-20 17:51:35.960353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.984490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.984523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.838 [2024-11-20 17:51:35.984535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.101 ms 00:21:12.838 [2024-11-20 17:51:35.984542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.984578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.984587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:12.838 [2024-11-20 17:51:35.984600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.838 [2024-11-20 17:51:35.984607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.984690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.838 [2024-11-20 17:51:35.984700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.838 [2024-11-20 17:51:35.984712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:12.838 [2024-11-20 17:51:35.984719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.838 [2024-11-20 17:51:35.985889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3950.907 ms, result 0 00:21:12.838 { 00:21:12.838 "name": "ftl0", 00:21:12.838 "uuid": "582e7997-38e0-40d2-a69e-d470f323bfc0" 00:21:12.838 } 00:21:12.838 17:51:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:12.838 17:51:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:12.838 17:51:36 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:12.838 17:51:36 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:13.130 [2024-11-20 17:51:36.397164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.130 [2024-11-20 17:51:36.397211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:13.130 [2024-11-20 17:51:36.397224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:13.130 [2024-11-20 17:51:36.397239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.130 [2024-11-20 17:51:36.397261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:13.130 [2024-11-20 17:51:36.399898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.130 [2024-11-20 17:51:36.399928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:13.131 [2024-11-20 17:51:36.399940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:21:13.131 [2024-11-20 17:51:36.399947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.400207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.400223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:13.131 [2024-11-20 17:51:36.400233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:21:13.131 [2024-11-20 17:51:36.400240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.403473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.403495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:13.131 [2024-11-20 17:51:36.403506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:21:13.131 [2024-11-20 17:51:36.403514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.409626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.409655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:13.131 [2024-11-20 17:51:36.409670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.094 ms 00:21:13.131 [2024-11-20 17:51:36.409678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.433533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.433567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:13.131 [2024-11-20 17:51:36.433579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.791 ms 00:21:13.131 [2024-11-20 17:51:36.433586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.449119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.449154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:13.131 [2024-11-20 17:51:36.449166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.496 ms 00:21:13.131 [2024-11-20 17:51:36.449173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.449314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.449324] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:13.131 [2024-11-20 17:51:36.449335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:13.131 [2024-11-20 17:51:36.449342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.472724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.472757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:13.131 [2024-11-20 17:51:36.472768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.361 ms 00:21:13.131 [2024-11-20 17:51:36.472775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.495581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.495613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:13.131 [2024-11-20 17:51:36.495624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.773 ms 00:21:13.131 [2024-11-20 17:51:36.495631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.518109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.518140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:13.131 [2024-11-20 17:51:36.518151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.443 ms 00:21:13.131 [2024-11-20 17:51:36.518158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.540449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.131 [2024-11-20 17:51:36.540486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:13.131 [2024-11-20 17:51:36.540497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.224 ms 00:21:13.131 [2024-11-20 17:51:36.540503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.131 [2024-11-20 17:51:36.540536] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:13.131 [2024-11-20 17:51:36.540549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540628] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 
[2024-11-20 17:51:36.540832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:13.131 [2024-11-20 17:51:36.540856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.540997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:13.132 [2024-11-20 17:51:36.541051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:13.132 [2024-11-20 17:51:36.541391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:13.132 [2024-11-20 17:51:36.541401] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 582e7997-38e0-40d2-a69e-d470f323bfc0 00:21:13.132 [2024-11-20 17:51:36.541409] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:13.132 [2024-11-20 17:51:36.541419] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:13.132 [2024-11-20 17:51:36.541426] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:13.132 [2024-11-20 17:51:36.541437] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:13.132 [2024-11-20 17:51:36.541443] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:13.132 [2024-11-20 17:51:36.541452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:13.132 [2024-11-20 17:51:36.541458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:13.132 [2024-11-20 17:51:36.541466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:13.132 [2024-11-20 17:51:36.541473] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:13.132 [2024-11-20 17:51:36.541481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.133 [2024-11-20 17:51:36.541488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:13.133 [2024-11-20 17:51:36.541497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:21:13.133 [2024-11-20 17:51:36.541504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.133 [2024-11-20 17:51:36.553597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.133 [2024-11-20 17:51:36.553627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:13.133 [2024-11-20 17:51:36.553638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.059 ms 00:21:13.133 [2024-11-20 17:51:36.553646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.133 [2024-11-20 17:51:36.553994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:13.133 [2024-11-20 17:51:36.554003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:13.133 [2024-11-20 17:51:36.554014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:21:13.133 [2024-11-20 17:51:36.554021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.133 [2024-11-20 17:51:36.595559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.133 [2024-11-20 17:51:36.595590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:13.133 [2024-11-20 17:51:36.595601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.133 [2024-11-20 17:51:36.595609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.133 [2024-11-20 17:51:36.595666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.133 [2024-11-20 17:51:36.595674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:13.133 [2024-11-20 17:51:36.595685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.133 [2024-11-20 17:51:36.595693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.133 [2024-11-20 17:51:36.595768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.133 [2024-11-20 17:51:36.595778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:13.133 [2024-11-20 17:51:36.595787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.133 [2024-11-20 17:51:36.595794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.133 [2024-11-20 17:51:36.595814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.133 [2024-11-20 17:51:36.595822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:13.133 [2024-11-20 17:51:36.595830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.133 [2024-11-20 17:51:36.595837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.670906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.670935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:13.392 [2024-11-20 17:51:36.670947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:13.392 [2024-11-20 17:51:36.670955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.732704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.732742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:13.392 [2024-11-20 17:51:36.732754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.392 [2024-11-20 17:51:36.732765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.732832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.732842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:13.392 [2024-11-20 17:51:36.732851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.392 [2024-11-20 17:51:36.732858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.732936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.732946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:13.392 [2024-11-20 17:51:36.732955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.392 [2024-11-20 17:51:36.732962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.733052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.733061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:13.392 [2024-11-20 17:51:36.733070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.392 [2024-11-20 17:51:36.733077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.733109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.733117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:13.392 [2024-11-20 17:51:36.733126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.392 [2024-11-20 17:51:36.733134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.733170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.733179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:13.392 [2024-11-20 17:51:36.733188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.392 [2024-11-20 17:51:36.733195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.733237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:13.392 [2024-11-20 17:51:36.733246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:13.392 [2024-11-20 17:51:36.733255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:13.392 [2024-11-20 17:51:36.733263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:13.392 [2024-11-20 17:51:36.733386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.188 ms, result 0 00:21:13.392 true 00:21:13.392 17:51:36 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77360 
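The killprocess 77360 call above expands into the common/autotest_common.sh xtrace that follows. Condensed into one place, the helper behaves roughly like this (a reconstruction from the trace, not the verbatim source; the real function carries more error handling):

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1               # @954: reject an empty pid
    kill -0 "$pid" || return 1              # @958: is the process still alive?
    if [[ $(uname) == Linux ]]; then        # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960
    fi
    # @964: the trace compares the name against "sudo" (here it is
    # reactor_0); a sudo wrapper gets special handling, shown as a
    # bail-out in this sketch.
    [[ $process_name == sudo ]] && return 1
    echo "killing process with pid $pid"    # @972
    kill "$pid"                             # @973
    wait "$pid" || true                     # @978: reap the target's exit status
}

In this run the target's comm is reactor_0 (the SPDK app started earlier), so the kill/wait path runs and the FTL shutdown completed above is followed by process teardown.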
00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77360 ']' 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77360 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77360 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.392 killing process with pid 77360 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77360' 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77360 00:21:13.392 17:51:36 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77360 00:21:21.521 17:51:43 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:24.825 262144+0 records in 00:21:24.825 262144+0 records out 00:21:24.825 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.10919 s, 261 MB/s 00:21:24.825 17:51:47 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:25.767 17:51:49 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:26.026 [2024-11-20 17:51:49.328254] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:21:26.026 [2024-11-20 17:51:49.328378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77587 ] 00:21:26.026 [2024-11-20 17:51:49.481415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.288 [2024-11-20 17:51:49.611382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.549 [2024-11-20 17:51:49.906537] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.549 [2024-11-20 17:51:49.906615] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.549 [2024-11-20 17:51:50.067479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.549 [2024-11-20 17:51:50.067551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:26.549 [2024-11-20 17:51:50.067570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:26.549 [2024-11-20 17:51:50.067579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.549 [2024-11-20 17:51:50.067632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.550 [2024-11-20 17:51:50.067642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:26.550 [2024-11-20 17:51:50.067655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:26.550 [2024-11-20 17:51:50.067663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.550 [2024-11-20 17:51:50.067689] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:26.550 [2024-11-20 17:51:50.068401] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:26.550 [2024-11-20 17:51:50.068428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.550 [2024-11-20 17:51:50.068438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:26.550 [2024-11-20 17:51:50.068447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:21:26.550 [2024-11-20 17:51:50.068455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.550 [2024-11-20 17:51:50.070191] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:26.550 [2024-11-20 17:51:50.084364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.550 [2024-11-20 17:51:50.084422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:26.550 [2024-11-20 17:51:50.084438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.175 ms 00:21:26.550 [2024-11-20 17:51:50.084448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.550 [2024-11-20 17:51:50.084541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.550 [2024-11-20 17:51:50.084552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:26.550 [2024-11-20 17:51:50.084561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:26.550 [2024-11-20 17:51:50.084569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.093218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.810 [2024-11-20 17:51:50.093265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:26.810 [2024-11-20 17:51:50.093276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.568 ms 00:21:26.810 [2024-11-20 17:51:50.093291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.093372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.810 [2024-11-20 17:51:50.093382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:26.810 [2024-11-20 17:51:50.093390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:26.810 [2024-11-20 17:51:50.093398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.093446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.810 [2024-11-20 17:51:50.093457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:26.810 [2024-11-20 17:51:50.093466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:26.810 [2024-11-20 17:51:50.093474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.093502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:26.810 [2024-11-20 17:51:50.097694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.810 [2024-11-20 17:51:50.097738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:26.810 [2024-11-20 17:51:50.097749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:21:26.810 [2024-11-20 17:51:50.097761] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.097798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.810 [2024-11-20 17:51:50.097806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:26.810 [2024-11-20 17:51:50.097815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:26.810 [2024-11-20 17:51:50.097823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.097894] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:26.810 [2024-11-20 17:51:50.097919] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:26.810 [2024-11-20 17:51:50.097959] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:26.810 [2024-11-20 17:51:50.097980] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:26.810 [2024-11-20 17:51:50.098088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:26.810 [2024-11-20 17:51:50.098100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:26.810 [2024-11-20 17:51:50.098111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:26.810 [2024-11-20 17:51:50.098122] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098131] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098140] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:26.810 [2024-11-20 17:51:50.098148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:26.810 [2024-11-20 17:51:50.098157] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:26.810 [2024-11-20 17:51:50.098167] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:26.810 [2024-11-20 17:51:50.098175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.810 [2024-11-20 17:51:50.098183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:26.810 [2024-11-20 17:51:50.098191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:21:26.810 [2024-11-20 17:51:50.098200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.098288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.810 [2024-11-20 17:51:50.098297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:26.810 [2024-11-20 17:51:50.098305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:26.810 [2024-11-20 17:51:50.098313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.810 [2024-11-20 17:51:50.098436] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:26.810 [2024-11-20 17:51:50.098463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:26.810 [2024-11-20 17:51:50.098473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:26.810 [2024-11-20 17:51:50.098481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:26.810 [2024-11-20 17:51:50.098498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:26.810 [2024-11-20 17:51:50.098520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.810 [2024-11-20 17:51:50.098534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:26.810 [2024-11-20 17:51:50.098542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:26.810 [2024-11-20 17:51:50.098551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.810 [2024-11-20 17:51:50.098559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:26.810 [2024-11-20 17:51:50.098566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:26.810 [2024-11-20 17:51:50.098580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:26.810 [2024-11-20 17:51:50.098595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:26.810 [2024-11-20 17:51:50.098616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:26.810 [2024-11-20 17:51:50.098636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:26.810 [2024-11-20 17:51:50.098656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:26.810 [2024-11-20 17:51:50.098677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.810 [2024-11-20 17:51:50.098690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:26.810 [2024-11-20 17:51:50.098697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.810 [2024-11-20 17:51:50.098710] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:26.810 [2024-11-20 17:51:50.098717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:26.810 [2024-11-20 17:51:50.098723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.810 [2024-11-20 17:51:50.098729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:26.810 [2024-11-20 17:51:50.098736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:26.810 [2024-11-20 17:51:50.098742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.810 [2024-11-20 17:51:50.098748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:26.811 [2024-11-20 17:51:50.098754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:26.811 [2024-11-20 17:51:50.098761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.811 [2024-11-20 17:51:50.098767] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:26.811 [2024-11-20 17:51:50.098776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:26.811 [2024-11-20 17:51:50.098784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.811 [2024-11-20 17:51:50.098792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.811 [2024-11-20 17:51:50.098801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:26.811 [2024-11-20 17:51:50.098808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:26.811 [2024-11-20 17:51:50.098815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:26.811 [2024-11-20 17:51:50.098822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:26.811 [2024-11-20 17:51:50.098829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:26.811 [2024-11-20 17:51:50.098835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:26.811 [2024-11-20 17:51:50.098844] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:26.811 [2024-11-20 17:51:50.098853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.811 [2024-11-20 17:51:50.098861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:26.811 [2024-11-20 17:51:50.098868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:26.811 [2024-11-20 17:51:50.098900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:26.811 [2024-11-20 17:51:50.098909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:26.811 [2024-11-20 17:51:50.098917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:26.811 [2024-11-20 17:51:50.098925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:26.811 [2024-11-20 17:51:50.098932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:26.811 [2024-11-20 17:51:50.098940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:26.811 [2024-11-20 17:51:50.098948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:26.811 [2024-11-20 17:51:50.098955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:26.811 [2024-11-20 17:51:50.098963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:26.811 [2024-11-20 17:51:50.098972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:26.811 [2024-11-20 17:51:50.098983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:26.811 [2024-11-20 17:51:50.098991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:26.811 [2024-11-20 17:51:50.098999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:26.811 [2024-11-20 17:51:50.099010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.811 [2024-11-20 17:51:50.099019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:26.811 [2024-11-20 17:51:50.099027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:26.811 [2024-11-20 17:51:50.099034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:26.811 [2024-11-20 17:51:50.099042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:26.811 [2024-11-20 17:51:50.099051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.099060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:26.811 [2024-11-20 17:51:50.099071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:21:26.811 [2024-11-20 17:51:50.099079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.131931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.131987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:26.811 [2024-11-20 17:51:50.131999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.805 ms 00:21:26.811 [2024-11-20 17:51:50.132012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.132109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.132118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:26.811 [2024-11-20 17:51:50.132127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.070 ms 00:21:26.811 [2024-11-20 17:51:50.132135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.179826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.179911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:26.811 [2024-11-20 17:51:50.179925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.627 ms 00:21:26.811 [2024-11-20 17:51:50.179934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.179988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.179998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:26.811 [2024-11-20 17:51:50.180011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:26.811 [2024-11-20 17:51:50.180019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.180675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.180732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:26.811 [2024-11-20 17:51:50.180750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:21:26.811 [2024-11-20 17:51:50.180762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.180986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.181000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:26.811 [2024-11-20 17:51:50.181018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:21:26.811 [2024-11-20 17:51:50.181026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.197060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.197115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:26.811 [2024-11-20 17:51:50.197127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.008 ms 00:21:26.811 [2024-11-20 17:51:50.197135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.211498] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:26.811 [2024-11-20 17:51:50.211556] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:26.811 [2024-11-20 17:51:50.211570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.211579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:26.811 [2024-11-20 17:51:50.211590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.318 ms 00:21:26.811 [2024-11-20 17:51:50.211598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.237294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.237342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:26.811 [2024-11-20 17:51:50.237356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.636 ms 00:21:26.811 [2024-11-20 17:51:50.237365] mngt/ftl_mngt.c: 
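(Editor's note: every management step in this log is bracketed by the same four trace_step notices, Action / name / duration / status, emitted from mngt/ftl_mngt.c. The following is a rough sketch of how such per-step timing records can be produced; it is illustrative only, not SPDK's actual trace_step implementation, and uses clock_gettime in place of SPDK's internal timing helpers:)

    #include <stdio.h>
    #include <time.h>

    /* Illustrative only: run one management step and emit records in
     * the same Action/name/duration/status shape seen in this log. */
    static int run_step(const char *dev, const char *name, int (*fn)(void))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;

        printf("[FTL][%s] Action\n", dev);
        printf("[FTL][%s]  name: %s\n", dev, name);
        printf("[FTL][%s]  duration: %.3f ms\n", dev, ms);
        printf("[FTL][%s]  status: %d\n", dev, status);
        return status;
    }

    static int init_layout(void) { return 0; } /* stand-in for a real step */

    int main(void) { return run_step("ftl0", "Initialize layout", init_layout); }
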
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.249726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.249771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:26.811 [2024-11-20 17:51:50.249782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.316 ms 00:21:26.811 [2024-11-20 17:51:50.249789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.261581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.261625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:26.811 [2024-11-20 17:51:50.261635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.755 ms 00:21:26.811 [2024-11-20 17:51:50.261642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.262343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.262375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:26.811 [2024-11-20 17:51:50.262398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:21:26.811 [2024-11-20 17:51:50.262409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.317985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.318030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:26.811 [2024-11-20 17:51:50.318042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.557 ms 00:21:26.811 [2024-11-20 17:51:50.318055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.328395] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:26.811 [2024-11-20 17:51:50.330653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.811 [2024-11-20 17:51:50.330686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:26.811 [2024-11-20 17:51:50.330697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.555 ms 00:21:26.811 [2024-11-20 17:51:50.330707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.811 [2024-11-20 17:51:50.330778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.812 [2024-11-20 17:51:50.330789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:26.812 [2024-11-20 17:51:50.330799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:26.812 [2024-11-20 17:51:50.330807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.812 [2024-11-20 17:51:50.330882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.812 [2024-11-20 17:51:50.330893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:26.812 [2024-11-20 17:51:50.330902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:26.812 [2024-11-20 17:51:50.330909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.812 [2024-11-20 17:51:50.330928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.812 [2024-11-20 17:51:50.330936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:26.812 [2024-11-20 17:51:50.330943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:26.812 [2024-11-20 17:51:50.330950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.812 [2024-11-20 17:51:50.330979] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:26.812 [2024-11-20 17:51:50.330989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.812 [2024-11-20 17:51:50.330997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:26.812 [2024-11-20 17:51:50.331004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:26.812 [2024-11-20 17:51:50.331011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.070 [2024-11-20 17:51:50.354739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.070 [2024-11-20 17:51:50.354772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:27.070 [2024-11-20 17:51:50.354783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.711 ms 00:21:27.070 [2024-11-20 17:51:50.354794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.070 [2024-11-20 17:51:50.354861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.070 [2024-11-20 17:51:50.354879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:27.070 [2024-11-20 17:51:50.354888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:27.070 [2024-11-20 17:51:50.354895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.070 [2024-11-20 17:51:50.356091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.187 ms, result 0 00:21:28.005  [2024-11-20T17:51:52.479Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-20T17:51:53.413Z] Copying: 42/1024 [MB] (24 MBps) [2024-11-20T17:51:54.810Z] Copying: 55/1024 [MB] (12 MBps) [2024-11-20T17:51:55.375Z] Copying: 74/1024 [MB] (18 MBps) [2024-11-20T17:51:56.749Z] Copying: 93/1024 [MB] (19 MBps) [2024-11-20T17:51:57.687Z] Copying: 113/1024 [MB] (20 MBps) [2024-11-20T17:51:58.620Z] Copying: 138/1024 [MB] (24 MBps) [2024-11-20T17:51:59.554Z] Copying: 159/1024 [MB] (20 MBps) [2024-11-20T17:52:00.489Z] Copying: 182/1024 [MB] (22 MBps) [2024-11-20T17:52:01.422Z] Copying: 210/1024 [MB] (28 MBps) [2024-11-20T17:52:02.796Z] Copying: 231/1024 [MB] (21 MBps) [2024-11-20T17:52:03.728Z] Copying: 256/1024 [MB] (24 MBps) [2024-11-20T17:52:04.663Z] Copying: 277/1024 [MB] (20 MBps) [2024-11-20T17:52:05.598Z] Copying: 296/1024 [MB] (19 MBps) [2024-11-20T17:52:06.533Z] Copying: 323/1024 [MB] (26 MBps) [2024-11-20T17:52:07.467Z] Copying: 370/1024 [MB] (47 MBps) [2024-11-20T17:52:08.402Z] Copying: 397/1024 [MB] (27 MBps) [2024-11-20T17:52:09.776Z] Copying: 413/1024 [MB] (15 MBps) [2024-11-20T17:52:10.709Z] Copying: 437/1024 [MB] (23 MBps) [2024-11-20T17:52:11.641Z] Copying: 490/1024 [MB] (52 MBps) [2024-11-20T17:52:12.575Z] Copying: 514/1024 [MB] (24 MBps) [2024-11-20T17:52:13.517Z] Copying: 537/1024 [MB] (22 MBps) [2024-11-20T17:52:14.474Z] Copying: 558/1024 [MB] (21 MBps) [2024-11-20T17:52:15.407Z] Copying: 581/1024 [MB] (22 MBps) [2024-11-20T17:52:16.778Z] Copying: 603/1024 [MB] (21 MBps) [2024-11-20T17:52:17.710Z] Copying: 628/1024 [MB] (25 MBps) [2024-11-20T17:52:18.643Z] Copying: 655/1024 [MB] (26 MBps) 
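(Editor's note: the bracketed "Copying: N/1024 [MB] (X MBps)" records here are spdk_dd's periodic progress meter for the restore copy; the stream ends with an overall average. A hypothetical helper for pulling the cumulative count out of one such sample, assuming the exact format shown in the log; parse_progress is an illustrative name, not an spdk_dd API:)

    #include <stdio.h>

    /* Hypothetical helper: extract the cumulative MB count from one
     * spdk_dd progress sample such as "Copying: 655/1024 [MB] (26 MBps)". */
    static int parse_progress(const char *line, unsigned *done, unsigned *total)
    {
        return sscanf(line, "Copying: %u/%u [MB]", done, total) == 2;
    }

    int main(void)
    {
        const char *sample = "Copying: 655/1024 [MB] (26 MBps)";
        unsigned done, total;

        if (parse_progress(sample, &done, &total))
            printf("%u of %u MB copied (%.1f%%)\n",
                   done, total, 100.0 * done / total);
        return 0;
    }
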
[2024-11-20T17:52:19.576Z] Copying: 681/1024 [MB] (25 MBps) [2024-11-20T17:52:20.508Z] Copying: 705/1024 [MB] (24 MBps) [2024-11-20T17:52:21.445Z] Copying: 726/1024 [MB] (21 MBps) [2024-11-20T17:52:22.376Z] Copying: 755/1024 [MB] (28 MBps) [2024-11-20T17:52:23.748Z] Copying: 776/1024 [MB] (21 MBps) [2024-11-20T17:52:24.682Z] Copying: 796/1024 [MB] (19 MBps) [2024-11-20T17:52:25.620Z] Copying: 817/1024 [MB] (20 MBps) [2024-11-20T17:52:26.558Z] Copying: 834/1024 [MB] (16 MBps) [2024-11-20T17:52:27.492Z] Copying: 855/1024 [MB] (21 MBps) [2024-11-20T17:52:28.433Z] Copying: 879/1024 [MB] (23 MBps) [2024-11-20T17:52:29.373Z] Copying: 895/1024 [MB] (15 MBps) [2024-11-20T17:52:30.759Z] Copying: 905/1024 [MB] (10 MBps) [2024-11-20T17:52:31.761Z] Copying: 919/1024 [MB] (14 MBps) [2024-11-20T17:52:32.703Z] Copying: 934/1024 [MB] (14 MBps) [2024-11-20T17:52:33.648Z] Copying: 947/1024 [MB] (13 MBps) [2024-11-20T17:52:34.593Z] Copying: 969/1024 [MB] (21 MBps) [2024-11-20T17:52:35.537Z] Copying: 983/1024 [MB] (14 MBps) [2024-11-20T17:52:36.479Z] Copying: 1001/1024 [MB] (17 MBps) [2024-11-20T17:52:36.739Z] Copying: 1020/1024 [MB] (19 MBps) [2024-11-20T17:52:36.739Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-20 17:52:36.648430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.199 [2024-11-20 17:52:36.648473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:13.199 [2024-11-20 17:52:36.648488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:13.199 [2024-11-20 17:52:36.648496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.199 [2024-11-20 17:52:36.648517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:13.199 [2024-11-20 17:52:36.651217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.200 [2024-11-20 17:52:36.651246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:13.200 [2024-11-20 17:52:36.651261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.686 ms 00:22:13.200 [2024-11-20 17:52:36.651269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.200 [2024-11-20 17:52:36.653800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.200 [2024-11-20 17:52:36.653833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:13.200 [2024-11-20 17:52:36.653844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms 00:22:13.200 [2024-11-20 17:52:36.653851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.200 [2024-11-20 17:52:36.670458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.200 [2024-11-20 17:52:36.670493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:13.200 [2024-11-20 17:52:36.670504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.592 ms 00:22:13.200 [2024-11-20 17:52:36.670511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.200 [2024-11-20 17:52:36.676679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.200 [2024-11-20 17:52:36.676710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:13.200 [2024-11-20 17:52:36.676721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.129 ms 00:22:13.200 [2024-11-20 17:52:36.676729] mngt/ftl_mngt.c: 
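(Editor's note: the 'FTL shutdown' sequence that begins here persists each piece of metadata in turn, L2P, NV cache metadata, valid map, P2L, band info, trim metadata, then the superblock, and only then runs "Set FTL clean state", as the records that follow show. Ordering the clean-state flip last means a crash at any earlier point leaves the device dirty for the next startup. A minimal sketch of that ordering, assuming a hypothetical dev structure; this is not SPDK code:)

    #include <stdio.h>

    /* Illustrative shutdown ordering only: persist all metadata first,
     * then the superblock, then mark the device clean, so a crash at
     * any earlier point leaves the dirty flag set for the next load. */
    struct dev { int clean; };

    static int persist(struct dev *d, const char *what)
    {
        (void)d;
        printf("persist %s\n", what);
        return 0; /* pretend the write succeeded */
    }

    int main(void)
    {
        struct dev d = { .clean = 0 };
        const char *steps[] = { "L2P", "NV cache metadata", "valid map",
                                "P2L metadata", "band info", "trim metadata",
                                "superblock" };

        for (unsigned i = 0; i < sizeof(steps) / sizeof(steps[0]); i++)
            if (persist(&d, steps[i]))
                return 1; /* bail out, leaving the dirty state on media */

        d.clean = 1; /* counterpart of "Set FTL clean state" in the log */
        printf("clean shutdown recorded: %d\n", d.clean);
        return 0;
    }
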
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.200 [2024-11-20 17:52:36.701799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.200 [2024-11-20 17:52:36.701837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:13.200 [2024-11-20 17:52:36.701847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.020 ms 00:22:13.200 [2024-11-20 17:52:36.701855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.200 [2024-11-20 17:52:36.716564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.200 [2024-11-20 17:52:36.716602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:13.200 [2024-11-20 17:52:36.716613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.661 ms 00:22:13.200 [2024-11-20 17:52:36.716621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.200 [2024-11-20 17:52:36.716745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.200 [2024-11-20 17:52:36.716760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:13.200 [2024-11-20 17:52:36.716769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:13.200 [2024-11-20 17:52:36.716777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.463 [2024-11-20 17:52:36.741734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.463 [2024-11-20 17:52:36.741774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:13.463 [2024-11-20 17:52:36.741785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.943 ms 00:22:13.463 [2024-11-20 17:52:36.741792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.463 [2024-11-20 17:52:36.766257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.463 [2024-11-20 17:52:36.766303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:13.463 [2024-11-20 17:52:36.766325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.427 ms 00:22:13.463 [2024-11-20 17:52:36.766332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.463 [2024-11-20 17:52:36.791422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.463 [2024-11-20 17:52:36.791477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:13.463 [2024-11-20 17:52:36.791488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.045 ms 00:22:13.463 [2024-11-20 17:52:36.791495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.463 [2024-11-20 17:52:36.817278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.463 [2024-11-20 17:52:36.817332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:13.463 [2024-11-20 17:52:36.817344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.691 ms 00:22:13.463 [2024-11-20 17:52:36.817351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.463 [2024-11-20 17:52:36.817397] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:13.463 [2024-11-20 17:52:36.817413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817432] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817622] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:13.463 [2024-11-20 17:52:36.817748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 
[2024-11-20 17:52:36.817814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.817995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:22:13.464 [2024-11-20 17:52:36.818018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:13.464 [2024-11-20 17:52:36.818215] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:22:13.464 [2024-11-20 17:52:36.818227] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 582e7997-38e0-40d2-a69e-d470f323bfc0 00:22:13.464 [2024-11-20 17:52:36.818237] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:13.464 [2024-11-20 17:52:36.818244] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:13.464 [2024-11-20 17:52:36.818251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:13.464 [2024-11-20 17:52:36.818259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:13.464 [2024-11-20 17:52:36.818266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:13.464 [2024-11-20 17:52:36.818274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:13.464 [2024-11-20 17:52:36.818281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:13.464 [2024-11-20 17:52:36.818294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:13.464 [2024-11-20 17:52:36.818300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:13.464 [2024-11-20 17:52:36.818308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.464 [2024-11-20 17:52:36.818316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:13.464 [2024-11-20 17:52:36.818325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:22:13.464 [2024-11-20 17:52:36.818332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.464 [2024-11-20 17:52:36.832268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.464 [2024-11-20 17:52:36.832311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:13.464 [2024-11-20 17:52:36.832322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.900 ms 00:22:13.464 [2024-11-20 17:52:36.832331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.464 [2024-11-20 17:52:36.832716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.464 [2024-11-20 17:52:36.832727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:13.464 [2024-11-20 17:52:36.832735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:22:13.464 [2024-11-20 17:52:36.832751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.464 [2024-11-20 17:52:36.869668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.464 [2024-11-20 17:52:36.869724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.464 [2024-11-20 17:52:36.869735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.464 [2024-11-20 17:52:36.869744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.464 [2024-11-20 17:52:36.869811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.464 [2024-11-20 17:52:36.869819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.464 [2024-11-20 17:52:36.869828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.464 [2024-11-20 17:52:36.869843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.465 [2024-11-20 17:52:36.869920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.465 [2024-11-20 
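(Editor's note: the statistics dump above reports total writes: 960, user writes: 0 and "WAF: inf". Write amplification factor is total media writes divided by user writes, so zero user writes, as in this metadata-only run, is reported as infinity. A tiny C sketch of that computation using the values from the dump:)

    #include <math.h>
    #include <stdio.h>

    /* WAF as dumped above: media writes over user writes; zero user
     * writes yields "inf", matching the "WAF: inf" record in the log. */
    static double waf(double total_writes, double user_writes)
    {
        return user_writes == 0.0 ? INFINITY : total_writes / user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(960.0, 0.0)); /* values from the stats dump */
        return 0;
    }
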
17:52:36.869932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.465 [2024-11-20 17:52:36.869941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.465 [2024-11-20 17:52:36.869948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.465 [2024-11-20 17:52:36.869964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.465 [2024-11-20 17:52:36.869972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.465 [2024-11-20 17:52:36.869980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.465 [2024-11-20 17:52:36.869987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.465 [2024-11-20 17:52:36.956406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.465 [2024-11-20 17:52:36.956481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.465 [2024-11-20 17:52:36.956495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.465 [2024-11-20 17:52:36.956503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.026485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.727 [2024-11-20 17:52:37.026542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.727 [2024-11-20 17:52:37.026556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.727 [2024-11-20 17:52:37.026571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.026654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.727 [2024-11-20 17:52:37.026664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.727 [2024-11-20 17:52:37.026672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.727 [2024-11-20 17:52:37.026681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.026718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.727 [2024-11-20 17:52:37.026727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.727 [2024-11-20 17:52:37.026735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.727 [2024-11-20 17:52:37.026745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.026844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.727 [2024-11-20 17:52:37.026854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.727 [2024-11-20 17:52:37.026863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.727 [2024-11-20 17:52:37.026899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.026937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.727 [2024-11-20 17:52:37.026947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:13.727 [2024-11-20 17:52:37.026956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.727 [2024-11-20 17:52:37.026963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.027007] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.727 [2024-11-20 17:52:37.027017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.727 [2024-11-20 17:52:37.027025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.727 [2024-11-20 17:52:37.027033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.027082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.727 [2024-11-20 17:52:37.027093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.727 [2024-11-20 17:52:37.027102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.727 [2024-11-20 17:52:37.027110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.727 [2024-11-20 17:52:37.027243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.771 ms, result 0 00:22:14.671 00:22:14.671 00:22:14.671 17:52:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:14.671 [2024-11-20 17:52:37.950956] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:22:14.671 [2024-11-20 17:52:37.951658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78091 ] 00:22:14.671 [2024-11-20 17:52:38.112353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.930 [2024-11-20 17:52:38.232675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.189 [2024-11-20 17:52:38.522683] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:15.189 [2024-11-20 17:52:38.522740] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:15.189 [2024-11-20 17:52:38.679657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.189 [2024-11-20 17:52:38.679701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:15.189 [2024-11-20 17:52:38.679718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:15.189 [2024-11-20 17:52:38.679725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.189 [2024-11-20 17:52:38.679771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.679781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.190 [2024-11-20 17:52:38.679791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:15.190 [2024-11-20 17:52:38.679798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.679814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:15.190 [2024-11-20 17:52:38.680497] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:15.190 [2024-11-20 17:52:38.680518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.680526] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.190 [2024-11-20 17:52:38.680534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:22:15.190 [2024-11-20 17:52:38.680542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.681612] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:15.190 [2024-11-20 17:52:38.694406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.694436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:15.190 [2024-11-20 17:52:38.694447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 00:22:15.190 [2024-11-20 17:52:38.694455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.694509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.694517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:15.190 [2024-11-20 17:52:38.694525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:15.190 [2024-11-20 17:52:38.694532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.699483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.699511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.190 [2024-11-20 17:52:38.699520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.901 ms 00:22:15.190 [2024-11-20 17:52:38.699531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.699595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.699604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.190 [2024-11-20 17:52:38.699611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:15.190 [2024-11-20 17:52:38.699618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.699664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.699674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:15.190 [2024-11-20 17:52:38.699682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:15.190 [2024-11-20 17:52:38.699688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.699711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:15.190 [2024-11-20 17:52:38.702890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.702916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.190 [2024-11-20 17:52:38.702925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.186 ms 00:22:15.190 [2024-11-20 17:52:38.702934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.702962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.702969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:15.190 [2024-11-20 17:52:38.702977] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:15.190 [2024-11-20 17:52:38.702984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.703003] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:15.190 [2024-11-20 17:52:38.703020] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:15.190 [2024-11-20 17:52:38.703053] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:15.190 [2024-11-20 17:52:38.703069] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:15.190 [2024-11-20 17:52:38.703170] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:15.190 [2024-11-20 17:52:38.703180] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:15.190 [2024-11-20 17:52:38.703190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:15.190 [2024-11-20 17:52:38.703199] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:15.190 [2024-11-20 17:52:38.703208] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:15.190 [2024-11-20 17:52:38.703216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:15.190 [2024-11-20 17:52:38.703223] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:15.190 [2024-11-20 17:52:38.703229] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:15.190 [2024-11-20 17:52:38.703239] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:15.190 [2024-11-20 17:52:38.703247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.703254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:15.190 [2024-11-20 17:52:38.703261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:15.190 [2024-11-20 17:52:38.703267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.703349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.190 [2024-11-20 17:52:38.703357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:15.190 [2024-11-20 17:52:38.703364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:15.190 [2024-11-20 17:52:38.703370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.190 [2024-11-20 17:52:38.703471] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:15.190 [2024-11-20 17:52:38.703480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:15.190 [2024-11-20 17:52:38.703488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.190 [2024-11-20 17:52:38.703495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.190 [2024-11-20 17:52:38.703502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:15.190 [2024-11-20 17:52:38.703509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:15.190 
[2024-11-20 17:52:38.703515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:15.190 [2024-11-20 17:52:38.703523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:15.190 [2024-11-20 17:52:38.703530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:15.190 [2024-11-20 17:52:38.703537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.190 [2024-11-20 17:52:38.703544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:15.190 [2024-11-20 17:52:38.703552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:15.190 [2024-11-20 17:52:38.703558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:15.190 [2024-11-20 17:52:38.703565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:15.190 [2024-11-20 17:52:38.703572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:15.190 [2024-11-20 17:52:38.703583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.190 [2024-11-20 17:52:38.703589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:15.190 [2024-11-20 17:52:38.703595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:15.190 [2024-11-20 17:52:38.703602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.190 [2024-11-20 17:52:38.703608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:15.191 [2024-11-20 17:52:38.703615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.191 [2024-11-20 17:52:38.703627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:15.191 [2024-11-20 17:52:38.703634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.191 [2024-11-20 17:52:38.703647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:15.191 [2024-11-20 17:52:38.703653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.191 [2024-11-20 17:52:38.703666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:15.191 [2024-11-20 17:52:38.703672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:15.191 [2024-11-20 17:52:38.703684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:15.191 [2024-11-20 17:52:38.703691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.191 [2024-11-20 17:52:38.703703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:15.191 [2024-11-20 17:52:38.703710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:15.191 [2024-11-20 17:52:38.703716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:15.191 [2024-11-20 17:52:38.703722] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:22:15.191 [2024-11-20 17:52:38.703729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:15.191 [2024-11-20 17:52:38.703735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:15.191 [2024-11-20 17:52:38.703748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:15.191 [2024-11-20 17:52:38.703755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703762] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:15.191 [2024-11-20 17:52:38.703770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:15.191 [2024-11-20 17:52:38.703776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:15.191 [2024-11-20 17:52:38.703783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:15.191 [2024-11-20 17:52:38.703791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:15.191 [2024-11-20 17:52:38.703797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:15.191 [2024-11-20 17:52:38.703803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:15.191 [2024-11-20 17:52:38.703810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:15.191 [2024-11-20 17:52:38.703816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:15.191 [2024-11-20 17:52:38.703822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:15.191 [2024-11-20 17:52:38.703830] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:15.191 [2024-11-20 17:52:38.703839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.191 [2024-11-20 17:52:38.703847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:15.191 [2024-11-20 17:52:38.703854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:15.191 [2024-11-20 17:52:38.703861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:15.191 [2024-11-20 17:52:38.703879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:15.191 [2024-11-20 17:52:38.703887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:15.191 [2024-11-20 17:52:38.703895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:15.191 [2024-11-20 17:52:38.703901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:15.191 [2024-11-20 17:52:38.703908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:15.191 [2024-11-20 17:52:38.703915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:15.191 [2024-11-20 17:52:38.703922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:15.191 [2024-11-20 17:52:38.703929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:15.191 [2024-11-20 17:52:38.703936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:15.191 [2024-11-20 17:52:38.703942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:15.191 [2024-11-20 17:52:38.703949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:15.191 [2024-11-20 17:52:38.703956] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:15.191 [2024-11-20 17:52:38.703966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:15.191 [2024-11-20 17:52:38.703975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:15.191 [2024-11-20 17:52:38.703982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:15.191 [2024-11-20 17:52:38.703989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:15.191 [2024-11-20 17:52:38.703997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:15.191 [2024-11-20 17:52:38.704005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.191 [2024-11-20 17:52:38.704013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:15.191 [2024-11-20 17:52:38.704020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:22:15.191 [2024-11-20 17:52:38.704027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.459 [2024-11-20 17:52:38.729713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.459 [2024-11-20 17:52:38.729746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.459 [2024-11-20 17:52:38.729756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.635 ms 00:22:15.459 [2024-11-20 17:52:38.729763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.459 [2024-11-20 17:52:38.729841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.459 [2024-11-20 17:52:38.729849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:15.459 [2024-11-20 17:52:38.729857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:15.459 [2024-11-20 17:52:38.729864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.459 [2024-11-20 17:52:38.770422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.459 [2024-11-20 17:52:38.770457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:22:15.459 [2024-11-20 17:52:38.770469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.495 ms 00:22:15.459 [2024-11-20 17:52:38.770477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.459 [2024-11-20 17:52:38.770514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.459 [2024-11-20 17:52:38.770523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.459 [2024-11-20 17:52:38.770535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:15.459 [2024-11-20 17:52:38.770542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.459 [2024-11-20 17:52:38.770910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.459 [2024-11-20 17:52:38.770926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.459 [2024-11-20 17:52:38.770935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:22:15.459 [2024-11-20 17:52:38.770943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.771079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.771088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.460 [2024-11-20 17:52:38.771096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:22:15.460 [2024-11-20 17:52:38.771107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.784085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.784114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.460 [2024-11-20 17:52:38.784124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.960 ms 00:22:15.460 [2024-11-20 17:52:38.784134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.796993] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:15.460 [2024-11-20 17:52:38.797025] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:15.460 [2024-11-20 17:52:38.797037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.797045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:15.460 [2024-11-20 17:52:38.797053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:22:15.460 [2024-11-20 17:52:38.797059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.821269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.821301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:15.460 [2024-11-20 17:52:38.821312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.174 ms 00:22:15.460 [2024-11-20 17:52:38.821319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.832942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.832981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:15.460 [2024-11-20 17:52:38.832992] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.588 ms 00:22:15.460 [2024-11-20 17:52:38.832998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.844768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.844796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:15.460 [2024-11-20 17:52:38.844806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.739 ms 00:22:15.460 [2024-11-20 17:52:38.844813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.845409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.845433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:15.460 [2024-11-20 17:52:38.845442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:22:15.460 [2024-11-20 17:52:38.845452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.901204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.901394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:15.460 [2024-11-20 17:52:38.901418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.736 ms 00:22:15.460 [2024-11-20 17:52:38.901426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.911859] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:15.460 [2024-11-20 17:52:38.914042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.914073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:15.460 [2024-11-20 17:52:38.914086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.336 ms 00:22:15.460 [2024-11-20 17:52:38.914095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.914186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.914198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:15.460 [2024-11-20 17:52:38.914208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:15.460 [2024-11-20 17:52:38.914219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.914285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.914295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:15.460 [2024-11-20 17:52:38.914303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:15.460 [2024-11-20 17:52:38.914310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.914330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.914338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:15.460 [2024-11-20 17:52:38.914346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:15.460 [2024-11-20 17:52:38.914354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.914384] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:15.460 [2024-11-20 17:52:38.914403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.914411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:15.460 [2024-11-20 17:52:38.914418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:15.460 [2024-11-20 17:52:38.914425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.937516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.937548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:15.460 [2024-11-20 17:52:38.937559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.074 ms 00:22:15.460 [2024-11-20 17:52:38.937570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.937636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.460 [2024-11-20 17:52:38.937645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:15.460 [2024-11-20 17:52:38.937654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:15.460 [2024-11-20 17:52:38.937661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.460 [2024-11-20 17:52:38.938545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.479 ms, result 0 00:23:16.841 [2024-11-20T17:53:38.201Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-11-20 17:53:38.028805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.028884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:14.661 [2024-11-20 17:53:38.028898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:14.661 [2024-11-20 17:53:38.028906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.028928] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:14.661 [2024-11-20 17:53:38.032174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.032205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:14.661 [2024-11-20 17:53:38.032220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:23:14.661 [2024-11-20 17:53:38.032228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.032445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.032455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:14.661 [2024-11-20 17:53:38.032463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:23:14.661 [2024-11-20 17:53:38.032471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.036142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.036268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:14.661 [2024-11-20 17:53:38.036283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:23:14.661 [2024-11-20 17:53:38.036290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661
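Every management step in this log is reported through the same four-line pattern from mngt/ftl_mngt.c (trace_step, lines 427-431): Action, name, duration, status. A minimal sketch of that pattern, assuming a hypothetical step callback; the helper below is illustrative and is not the SPDK implementation:

#include <stdio.h>
#include <time.h>

/* Run one management step and report it in the Action/name/duration/status
 * form seen in the log; step() returning 0 means success. */
static void trace_step(const char *dev, const char *name, int (*step)(void))
{
	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	int status = step();
	clock_gettime(CLOCK_MONOTONIC, &t1);
	double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
		    (t1.tv_nsec - t0.tv_nsec) / 1e6;
	printf("[FTL][%s] Action\n", dev);
	printf("[FTL][%s] name: %s\n", dev, name);
	printf("[FTL][%s] duration: %.3f ms\n", dev, ms);
	printf("[FTL][%s] status: %d\n", dev, status);
}

static int persist_l2p(void) { return 0; } /* stand-in for a real step */

int main(void)
{
	trace_step("ftl0", "Persist L2P", persist_l2p);
	return 0;
}

The Rollback entries later in this log reuse the same four-line format for the teardown path.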
[2024-11-20 17:53:38.042426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.042453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:14.661 [2024-11-20 17:53:38.042463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.111 ms 00:23:14.661 [2024-11-20 17:53:38.042470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.067701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.067826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:14.661 [2024-11-20 17:53:38.067842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.173 ms 00:23:14.661 [2024-11-20 17:53:38.067850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.081949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.082068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:14.661 [2024-11-20 17:53:38.082085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.057 ms 00:23:14.661 [2024-11-20 17:53:38.082093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.082201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.082216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:14.661 [2024-11-20 17:53:38.082224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:23:14.661 [2024-11-20 17:53:38.082231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.105501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.105611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:14.661 [2024-11-20 17:53:38.105626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.256 ms 00:23:14.661 [2024-11-20 17:53:38.105633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.661 [2024-11-20 17:53:38.128562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.661 [2024-11-20 17:53:38.128679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:14.661 [2024-11-20 17:53:38.128693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:23:14.661 [2024-11-20 17:53:38.128700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.662 [2024-11-20 17:53:38.151164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.662 [2024-11-20 17:53:38.151269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:14.662 [2024-11-20 17:53:38.151284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.438 ms 00:23:14.662 [2024-11-20 17:53:38.151291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.662 [2024-11-20 17:53:38.173536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.662 [2024-11-20 17:53:38.173563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:14.662 [2024-11-20 17:53:38.173572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.198 ms 00:23:14.662 [2024-11-20 17:53:38.173580] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.662 [2024-11-20 17:53:38.173608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:14.662 [2024-11-20 17:53:38.173622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.173994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174001] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:14.662 [2024-11-20 17:53:38.174106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 
17:53:38.174186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:23:14.663 [2024-11-20 17:53:38.174370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:14.663 [2024-11-20 17:53:38.174393] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:14.663 [2024-11-20 17:53:38.174403] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 582e7997-38e0-40d2-a69e-d470f323bfc0 00:23:14.663 [2024-11-20 17:53:38.174410] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:14.663 [2024-11-20 17:53:38.174417] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:14.663 [2024-11-20 17:53:38.174423] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:14.663 [2024-11-20 17:53:38.174431] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:14.663 [2024-11-20 17:53:38.174437] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:14.663 [2024-11-20 17:53:38.174445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:14.663 [2024-11-20 17:53:38.174458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:14.663 [2024-11-20 17:53:38.174464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:14.663 [2024-11-20 17:53:38.174470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:14.663 [2024-11-20 17:53:38.174477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.663 [2024-11-20 17:53:38.174484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:14.663 [2024-11-20 17:53:38.174492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:23:14.663 [2024-11-20 17:53:38.174506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.663 [2024-11-20 17:53:38.186766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.663 [2024-11-20 17:53:38.186865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:14.663 [2024-11-20 17:53:38.186888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.242 ms 00:23:14.663 [2024-11-20 17:53:38.186896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.663 [2024-11-20 17:53:38.187227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.663 [2024-11-20 17:53:38.187240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:14.663 [2024-11-20 17:53:38.187249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:23:14.663 [2024-11-20 17:53:38.187259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.219695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.219725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.922 [2024-11-20 17:53:38.219734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.219742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.219791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.219798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:23:14.922 [2024-11-20 17:53:38.219806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.219816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.219863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.219892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.922 [2024-11-20 17:53:38.219900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.219907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.219921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.219928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.922 [2024-11-20 17:53:38.219936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.219943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.295517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.295553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.922 [2024-11-20 17:53:38.295564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.295572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.357718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.357854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.922 [2024-11-20 17:53:38.357886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.357900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.357962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.357971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.922 [2024-11-20 17:53:38.357979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.357986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.358018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.358027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.922 [2024-11-20 17:53:38.358034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.358042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.358132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.358141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.922 [2024-11-20 17:53:38.358149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.358156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.358183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:23:14.922 [2024-11-20 17:53:38.358191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:14.922 [2024-11-20 17:53:38.358199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.358206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.358240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.358248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.922 [2024-11-20 17:53:38.358256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.358263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.358300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.922 [2024-11-20 17:53:38.358309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.922 [2024-11-20 17:53:38.358317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.922 [2024-11-20 17:53:38.358324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.922 [2024-11-20 17:53:38.358430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.599 ms, result 0 00:23:15.858 00:23:15.858 00:23:15.858 17:53:39 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:17.760 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:17.760 17:53:41 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:17.760 [2024-11-20 17:53:41.259979] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:23:17.760 [2024-11-20 17:53:41.260093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78742 ] 00:23:18.018 [2024-11-20 17:53:41.413709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.018 [2024-11-20 17:53:41.509689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.277 [2024-11-20 17:53:41.765011] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.277 [2024-11-20 17:53:41.765063] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.536 [2024-11-20 17:53:41.922707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.922748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:18.536 [2024-11-20 17:53:41.922763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:18.536 [2024-11-20 17:53:41.922772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.922817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.922827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.536 [2024-11-20 17:53:41.922837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:18.536 [2024-11-20 17:53:41.922844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.922859] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:18.536 [2024-11-20 17:53:41.923582] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:18.536 [2024-11-20 17:53:41.923603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.923610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.536 [2024-11-20 17:53:41.923619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:23:18.536 [2024-11-20 17:53:41.923626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.924665] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:18.536 [2024-11-20 17:53:41.937310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.937338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:18.536 [2024-11-20 17:53:41.937350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.647 ms 00:23:18.536 [2024-11-20 17:53:41.937357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.937412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.937421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:18.536 [2024-11-20 17:53:41.937429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:18.536 [2024-11-20 17:53:41.937436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.942166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:18.536 [2024-11-20 17:53:41.942193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.536 [2024-11-20 17:53:41.942202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:23:18.536 [2024-11-20 17:53:41.942213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.942277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.942286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.536 [2024-11-20 17:53:41.942294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:18.536 [2024-11-20 17:53:41.942301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.942346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.942356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.536 [2024-11-20 17:53:41.942363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:18.536 [2024-11-20 17:53:41.942370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.942392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:18.536 [2024-11-20 17:53:41.945684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.945708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.536 [2024-11-20 17:53:41.945717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:23:18.536 [2024-11-20 17:53:41.945727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.536 [2024-11-20 17:53:41.945753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.536 [2024-11-20 17:53:41.945761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.536 [2024-11-20 17:53:41.945769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.537 [2024-11-20 17:53:41.945776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.537 [2024-11-20 17:53:41.945793] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:18.537 [2024-11-20 17:53:41.945810] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:18.537 [2024-11-20 17:53:41.945844] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:18.537 [2024-11-20 17:53:41.945860] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:18.537 [2024-11-20 17:53:41.945970] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:18.537 [2024-11-20 17:53:41.945981] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.537 [2024-11-20 17:53:41.945992] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:18.537 [2024-11-20 17:53:41.946002] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946010] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946018] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:18.537 [2024-11-20 17:53:41.946025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.537 [2024-11-20 17:53:41.946032] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:18.537 [2024-11-20 17:53:41.946041] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:18.537 [2024-11-20 17:53:41.946049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.537 [2024-11-20 17:53:41.946056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.537 [2024-11-20 17:53:41.946063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:23:18.537 [2024-11-20 17:53:41.946070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.537 [2024-11-20 17:53:41.946152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.537 [2024-11-20 17:53:41.946160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.537 [2024-11-20 17:53:41.946167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:18.537 [2024-11-20 17:53:41.946173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.537 [2024-11-20 17:53:41.946274] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.537 [2024-11-20 17:53:41.946283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.537 [2024-11-20 17:53:41.946291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.537 [2024-11-20 17:53:41.946313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.537 [2024-11-20 17:53:41.946334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.537 [2024-11-20 17:53:41.946347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.537 [2024-11-20 17:53:41.946353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:18.537 [2024-11-20 17:53:41.946360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.537 [2024-11-20 17:53:41.946366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.537 [2024-11-20 17:53:41.946375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:18.537 [2024-11-20 17:53:41.946386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.537 [2024-11-20 17:53:41.946399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946405] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.537 [2024-11-20 17:53:41.946417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.537 [2024-11-20 17:53:41.946437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.537 [2024-11-20 17:53:41.946455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.537 [2024-11-20 17:53:41.946473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.537 [2024-11-20 17:53:41.946492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.537 [2024-11-20 17:53:41.946504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.537 [2024-11-20 17:53:41.946518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:18.537 [2024-11-20 17:53:41.946524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.537 [2024-11-20 17:53:41.946531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:18.537 [2024-11-20 17:53:41.946537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:18.537 [2024-11-20 17:53:41.946544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:18.537 [2024-11-20 17:53:41.946557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:18.537 [2024-11-20 17:53:41.946564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946570] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.537 [2024-11-20 17:53:41.946578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.537 [2024-11-20 17:53:41.946584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.537 [2024-11-20 17:53:41.946600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.537 [2024-11-20 17:53:41.946607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.537 [2024-11-20 17:53:41.946613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.537 
[2024-11-20 17:53:41.946620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.537 [2024-11-20 17:53:41.946626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.537 [2024-11-20 17:53:41.946633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.537 [2024-11-20 17:53:41.946640] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.537 [2024-11-20 17:53:41.946650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.537 [2024-11-20 17:53:41.946658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:18.537 [2024-11-20 17:53:41.946665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:18.537 [2024-11-20 17:53:41.946672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:18.537 [2024-11-20 17:53:41.946679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:18.537 [2024-11-20 17:53:41.946686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:18.537 [2024-11-20 17:53:41.946694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:18.537 [2024-11-20 17:53:41.946700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:18.537 [2024-11-20 17:53:41.946707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:18.537 [2024-11-20 17:53:41.946714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:18.537 [2024-11-20 17:53:41.946721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:18.537 [2024-11-20 17:53:41.946728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:18.537 [2024-11-20 17:53:41.946735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:18.537 [2024-11-20 17:53:41.946742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:18.537 [2024-11-20 17:53:41.946749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:18.537 [2024-11-20 17:53:41.946756] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.537 [2024-11-20 17:53:41.946766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.537 [2024-11-20 17:53:41.946774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.537 [2024-11-20 17:53:41.946781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.538 [2024-11-20 17:53:41.946788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.538 [2024-11-20 17:53:41.946795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.538 [2024-11-20 17:53:41.946803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:41.946810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.538 [2024-11-20 17:53:41.946817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:23:18.538 [2024-11-20 17:53:41.946824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:41.972429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:41.972552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.538 [2024-11-20 17:53:41.972607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.349 ms 00:23:18.538 [2024-11-20 17:53:41.972629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:41.972730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:41.972752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:18.538 [2024-11-20 17:53:41.972771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:18.538 [2024-11-20 17:53:41.972788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:42.017093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:42.017230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.538 [2024-11-20 17:53:42.017293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.249 ms 00:23:18.538 [2024-11-20 17:53:42.017317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:42.017367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:42.017391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.538 [2024-11-20 17:53:42.017416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.538 [2024-11-20 17:53:42.017434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:42.017786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:42.017823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.538 [2024-11-20 17:53:42.017843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:18.538 [2024-11-20 17:53:42.017862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:42.018023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:42.018048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.538 [2024-11-20 17:53:42.018112] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:18.538 [2024-11-20 17:53:42.018175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:42.031081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:42.031185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.538 [2024-11-20 17:53:42.031240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.871 ms 00:23:18.538 [2024-11-20 17:53:42.031263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:42.043683] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:18.538 [2024-11-20 17:53:42.043806] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:18.538 [2024-11-20 17:53:42.043864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:42.043895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:18.538 [2024-11-20 17:53:42.043916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.502 ms 00:23:18.538 [2024-11-20 17:53:42.043934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.538 [2024-11-20 17:53:42.068321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.538 [2024-11-20 17:53:42.068435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:18.538 [2024-11-20 17:53:42.068484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.345 ms 00:23:18.538 [2024-11-20 17:53:42.068506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.080318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.080443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:18.797 [2024-11-20 17:53:42.080460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.513 ms 00:23:18.797 [2024-11-20 17:53:42.080469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.092176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.092206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:18.797 [2024-11-20 17:53:42.092217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.676 ms 00:23:18.797 [2024-11-20 17:53:42.092224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.092811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.092834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:18.797 [2024-11-20 17:53:42.092843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:23:18.797 [2024-11-20 17:53:42.092853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.147749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.147926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:18.797 [2024-11-20 17:53:42.147948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.879 ms 00:23:18.797 [2024-11-20 17:53:42.147955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.158100] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:18.797 [2024-11-20 17:53:42.160216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.160244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:18.797 [2024-11-20 17:53:42.160255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:23:18.797 [2024-11-20 17:53:42.160263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.160339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.160350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:18.797 [2024-11-20 17:53:42.160360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.797 [2024-11-20 17:53:42.160371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.160435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.160446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:18.797 [2024-11-20 17:53:42.160454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:18.797 [2024-11-20 17:53:42.160463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.160482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.160491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:18.797 [2024-11-20 17:53:42.160500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:18.797 [2024-11-20 17:53:42.160507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.160537] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:18.797 [2024-11-20 17:53:42.160546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.160553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:18.797 [2024-11-20 17:53:42.160560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.797 [2024-11-20 17:53:42.160567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.183484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.183602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:18.797 [2024-11-20 17:53:42.183618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.900 ms 00:23:18.797 [2024-11-20 17:53:42.183631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.797 [2024-11-20 17:53:42.183693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.797 [2024-11-20 17:53:42.183703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:18.797 [2024-11-20 17:53:42.183711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:18.797 [2024-11-20 17:53:42.183718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
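The trace_step records above follow a fixed shape per management step — an Action marker, then name, duration in milliseconds, and status — so per-step timings can be recovered mechanically from a saved copy of this console output. Below is a minimal post-processing sketch, not part of the test run itself: it assumes Python, a log file path passed on the command line, and only the record shapes visible in this log (it pairs each "name:" record with the next "duration:" record, matching the order trace_step emits them in).

import re
import sys

# Shapes taken from the records in this log:
#   "... trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands"
#   "... trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms"
# A step name runs until either a Jenkins wall-clock stamp (hh:mm:ss.mmm)
# or the end of the line, which covers both the folded console log above
# and a plain one-record-per-line stderr capture.
NAME_RE = re.compile(
    r"\[FTL\]\[\w+\] name: (.*?)(?=\s+\d{2}:\d{2}:\d{2}\.\d|\s*$)",
    re.MULTILINE,
)
DUR_RE = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(text):
    """Pair each 'name:' record with the first 'duration:' record after it."""
    names = [(m.start(), m.group(1)) for m in NAME_RE.finditer(text)]
    durs = [(m.start(), float(m.group(1))) for m in DUR_RE.finditer(text)]
    out, di = [], 0
    for pos, name in names:
        # Skip durations that belong to earlier records, then take the next one.
        while di < len(durs) and durs[di][0] < pos:
            di += 1
        if di < len(durs):
            out.append((name, durs[di][1]))
            di += 1
    return out

if __name__ == "__main__":
    log_text = open(sys.argv[1]).read()
    for name, ms in sorted(step_durations(log_text), key=lambda s: -s[1]):
        print(f"{ms:10.3f} ms  {name}")

Run against the startup sequence above, this would rank Restore P2L checkpoints (54.879 ms) and Initialize NV cache (44.249 ms) as the slowest steps, consistent with the 261.534 ms total reported for 'FTL startup' just below.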
00:23:18.797 [2024-11-20 17:53:42.184636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 261.534 ms, result 0 00:23:19.744  [2024-11-20T17:53:44.254Z] Copying: 14/1024 [MB] (14 MBps) … [2024-11-20T17:54:37.346Z] Copying: 1048536/1048576 [kB] (928 kBps) [2024-11-20T17:54:37.346Z] Copying: 1024/1024 [MB] 
(average 18 MBps)[2024-11-20 17:54:37.249541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.806 [2024-11-20 17:54:37.249620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:13.806 [2024-11-20 17:54:37.249638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.806 [2024-11-20 17:54:37.249657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.806 [2024-11-20 17:54:37.251986] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:13.806 [2024-11-20 17:54:37.257185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.806 [2024-11-20 17:54:37.257359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:13.806 [2024-11-20 17:54:37.257382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.153 ms 00:24:13.806 [2024-11-20 17:54:37.257391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.806 [2024-11-20 17:54:37.270352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.806 [2024-11-20 17:54:37.270397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:13.806 [2024-11-20 17:54:37.270409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.841 ms 00:24:13.806 [2024-11-20 17:54:37.270425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.806 [2024-11-20 17:54:37.293599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.806 [2024-11-20 17:54:37.293646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:13.806 [2024-11-20 17:54:37.293659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.156 ms 00:24:13.806 [2024-11-20 17:54:37.293667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.806 [2024-11-20 17:54:37.299846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.806 [2024-11-20 17:54:37.299905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:13.806 [2024-11-20 17:54:37.299918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms 00:24:13.806 [2024-11-20 17:54:37.299927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.806 [2024-11-20 17:54:37.326393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.806 [2024-11-20 17:54:37.326440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:13.806 [2024-11-20 17:54:37.326452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.410 ms 00:24:13.806 [2024-11-20 17:54:37.326461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.806 [2024-11-20 17:54:37.342904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.806 [2024-11-20 17:54:37.343097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:13.806 [2024-11-20 17:54:37.343119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.373 ms 00:24:13.806 [2024-11-20 17:54:37.343128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.066 [2024-11-20 17:54:37.520616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.066 [2024-11-20 17:54:37.520671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:14.067 
[2024-11-20 17:54:37.520686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 177.442 ms 00:24:14.067 [2024-11-20 17:54:37.520695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.067 [2024-11-20 17:54:37.545899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.067 [2024-11-20 17:54:37.545948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:14.067 [2024-11-20 17:54:37.545960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.186 ms 00:24:14.067 [2024-11-20 17:54:37.545968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.067 [2024-11-20 17:54:37.571174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.067 [2024-11-20 17:54:37.571230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:14.067 [2024-11-20 17:54:37.571241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.161 ms 00:24:14.067 [2024-11-20 17:54:37.571249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.067 [2024-11-20 17:54:37.595717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.067 [2024-11-20 17:54:37.595761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:14.067 [2024-11-20 17:54:37.595772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.425 ms 00:24:14.067 [2024-11-20 17:54:37.595780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.328 [2024-11-20 17:54:37.619773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.328 [2024-11-20 17:54:37.619986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:14.328 [2024-11-20 17:54:37.620008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.923 ms 00:24:14.328 [2024-11-20 17:54:37.620016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.328 [2024-11-20 17:54:37.620055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:14.328 [2024-11-20 17:54:37.620072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101888 / 261120 wr_cnt: 1 state: open 00:24:14.328 [2024-11-20 17:54:37.620084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:14.328 [2024-11-20 17:54:37.620093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:14.328 [2024-11-20 17:54:37.620102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:14.328 [2024-11-20 17:54:37.620110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:14.328 [2024-11-20 17:54:37.620119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:14.328 [2024-11-20 17:54:37.620126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:14.328 [2024-11-20 17:54:37.620135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:14.328 [2024-11-20 17:54:37.620143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 
state: free 00:24:14.329 [2024-11-20 17:54:37.620159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 
0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620776] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:14.329 [2024-11-20 17:54:37.620908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:14.330 [2024-11-20 17:54:37.620916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:14.330 [2024-11-20 17:54:37.620925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:14.330 [2024-11-20 17:54:37.620941] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:14.330 [2024-11-20 17:54:37.620950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 582e7997-38e0-40d2-a69e-d470f323bfc0 00:24:14.330 [2024-11-20 17:54:37.620958] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101888 00:24:14.330 [2024-11-20 17:54:37.620966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102848 00:24:14.330 [2024-11-20 17:54:37.620974] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101888 00:24:14.330 [2024-11-20 17:54:37.620983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:24:14.330 [2024-11-20 17:54:37.620991] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:14.330 [2024-11-20 17:54:37.621004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:14.330 [2024-11-20 17:54:37.621020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:14.330 [2024-11-20 17:54:37.621027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:14.330 [2024-11-20 17:54:37.621034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:14.330 [2024-11-20 17:54:37.621041] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.330 [2024-11-20 17:54:37.621050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:14.330 [2024-11-20 17:54:37.621058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:24:14.330 [2024-11-20 17:54:37.621067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.634828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.330 [2024-11-20 17:54:37.635003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:14.330 [2024-11-20 17:54:37.635060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.741 ms 00:24:14.330 [2024-11-20 17:54:37.635093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.635507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.330 [2024-11-20 17:54:37.635542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:14.330 [2024-11-20 17:54:37.635614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:24:14.330 [2024-11-20 17:54:37.635638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.671808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.671996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.330 [2024-11-20 17:54:37.672062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.672088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.672177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.672202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.330 [2024-11-20 17:54:37.672223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.672242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.672408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.672717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.330 [2024-11-20 17:54:37.672779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.672803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.672836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.672908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.330 [2024-11-20 17:54:37.672933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.672952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.756633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.756812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.330 [2024-11-20 17:54:37.756891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.756916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.825650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.825825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.330 [2024-11-20 17:54:37.825906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.825932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.826005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.826032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.330 [2024-11-20 17:54:37.826117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.826148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.826225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.826277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.330 [2024-11-20 17:54:37.826300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.826320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.826462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.826491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.330 [2024-11-20 17:54:37.826511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.826531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.826587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.826611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:14.330 [2024-11-20 17:54:37.826631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.826668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.826723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.826975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.330 [2024-11-20 17:54:37.827002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.827022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.827095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.330 [2024-11-20 17:54:37.827121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.330 [2024-11-20 17:54:37.827141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.330 [2024-11-20 17:54:37.827160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.330 [2024-11-20 17:54:37.827309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 581.463 ms, result 0 00:24:15.716 00:24:15.716 00:24:15.716 17:54:39 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:15.716 [2024-11-20 17:54:39.130470] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:24:15.716 [2024-11-20 17:54:39.130899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79349 ] 00:24:15.978 [2024-11-20 17:54:39.294167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.978 [2024-11-20 17:54:39.411786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.239 [2024-11-20 17:54:39.706749] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.239 [2024-11-20 17:54:39.706824] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.502 [2024-11-20 17:54:39.867408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.867466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:16.502 [2024-11-20 17:54:39.867485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:16.502 [2024-11-20 17:54:39.867495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.867549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.867561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.502 [2024-11-20 17:54:39.867573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:16.502 [2024-11-20 17:54:39.867582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.867602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:16.502 [2024-11-20 17:54:39.868709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:16.502 [2024-11-20 17:54:39.868767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.868779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.502 [2024-11-20 17:54:39.868790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:24:16.502 [2024-11-20 17:54:39.868798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.870485] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:16.502 [2024-11-20 17:54:39.884957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.885020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:16.502 [2024-11-20 17:54:39.885033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.473 ms 00:24:16.502 [2024-11-20 17:54:39.885041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.885118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.885128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:16.502 [2024-11-20 17:54:39.885138] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:16.502 [2024-11-20 17:54:39.885146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.893130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.893171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.502 [2024-11-20 17:54:39.893181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.903 ms 00:24:16.502 [2024-11-20 17:54:39.893195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.893274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.893284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.502 [2024-11-20 17:54:39.893292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:16.502 [2024-11-20 17:54:39.893300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.893344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.893354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:16.502 [2024-11-20 17:54:39.893363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:16.502 [2024-11-20 17:54:39.893371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.893398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.502 [2024-11-20 17:54:39.897460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.897499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.502 [2024-11-20 17:54:39.897510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:24:16.502 [2024-11-20 17:54:39.897521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.897563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.502 [2024-11-20 17:54:39.897571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:16.502 [2024-11-20 17:54:39.897580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:16.502 [2024-11-20 17:54:39.897588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.502 [2024-11-20 17:54:39.897639] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:16.502 [2024-11-20 17:54:39.897661] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:16.502 [2024-11-20 17:54:39.897698] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:16.502 [2024-11-20 17:54:39.897718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:16.502 [2024-11-20 17:54:39.897825] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:16.502 [2024-11-20 17:54:39.897836] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:16.502 [2024-11-20 17:54:39.897846] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:16.503 [2024-11-20 17:54:39.897857] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:16.503 [2024-11-20 17:54:39.897887] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:16.503 [2024-11-20 17:54:39.897898] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:16.503 [2024-11-20 17:54:39.897906] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:16.503 [2024-11-20 17:54:39.897914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:16.503 [2024-11-20 17:54:39.897925] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:16.503 [2024-11-20 17:54:39.897934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.503 [2024-11-20 17:54:39.897943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:16.503 [2024-11-20 17:54:39.897952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:24:16.503 [2024-11-20 17:54:39.897959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.503 [2024-11-20 17:54:39.898043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.503 [2024-11-20 17:54:39.898052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:16.503 [2024-11-20 17:54:39.898060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:16.503 [2024-11-20 17:54:39.898067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.503 [2024-11-20 17:54:39.898175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:16.503 [2024-11-20 17:54:39.898185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:16.503 [2024-11-20 17:54:39.898194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:16.503 [2024-11-20 17:54:39.898217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:16.503 [2024-11-20 17:54:39.898239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.503 [2024-11-20 17:54:39.898254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:16.503 [2024-11-20 17:54:39.898261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:16.503 [2024-11-20 17:54:39.898268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.503 [2024-11-20 17:54:39.898275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:16.503 [2024-11-20 17:54:39.898287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:16.503 [2024-11-20 17:54:39.898300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.503 
[2024-11-20 17:54:39.898308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:16.503 [2024-11-20 17:54:39.898315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:16.503 [2024-11-20 17:54:39.898337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:16.503 [2024-11-20 17:54:39.898357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:16.503 [2024-11-20 17:54:39.898378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:16.503 [2024-11-20 17:54:39.898399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:16.503 [2024-11-20 17:54:39.898420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.503 [2024-11-20 17:54:39.898434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:16.503 [2024-11-20 17:54:39.898441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:16.503 [2024-11-20 17:54:39.898447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.503 [2024-11-20 17:54:39.898454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:16.503 [2024-11-20 17:54:39.898461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:16.503 [2024-11-20 17:54:39.898467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:16.503 [2024-11-20 17:54:39.898480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:16.503 [2024-11-20 17:54:39.898488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:16.503 [2024-11-20 17:54:39.898502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:16.503 [2024-11-20 17:54:39.898510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.503 [2024-11-20 17:54:39.898528] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:16.503 [2024-11-20 17:54:39.898535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:16.503 [2024-11-20 17:54:39.898541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:16.503 [2024-11-20 17:54:39.898549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:16.503 [2024-11-20 17:54:39.898555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:16.503 [2024-11-20 17:54:39.898561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:16.503 [2024-11-20 17:54:39.898570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:16.503 [2024-11-20 17:54:39.898579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.503 [2024-11-20 17:54:39.898587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:16.503 [2024-11-20 17:54:39.898594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:16.503 [2024-11-20 17:54:39.898601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:16.503 [2024-11-20 17:54:39.898608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:16.503 [2024-11-20 17:54:39.898615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:16.503 [2024-11-20 17:54:39.898622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:16.503 [2024-11-20 17:54:39.898630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:16.503 [2024-11-20 17:54:39.898637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:16.503 [2024-11-20 17:54:39.898670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:16.503 [2024-11-20 17:54:39.898679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:16.503 [2024-11-20 17:54:39.898686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:16.503 [2024-11-20 17:54:39.898693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:16.503 [2024-11-20 17:54:39.898700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:16.503 [2024-11-20 17:54:39.898708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:16.503 [2024-11-20 17:54:39.898715] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:16.503 [2024-11-20 
17:54:39.898732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.503 [2024-11-20 17:54:39.898741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:16.503 [2024-11-20 17:54:39.898749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:16.503 [2024-11-20 17:54:39.898756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:16.503 [2024-11-20 17:54:39.898764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:16.503 [2024-11-20 17:54:39.898771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.503 [2024-11-20 17:54:39.898779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:16.503 [2024-11-20 17:54:39.898787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:24:16.503 [2024-11-20 17:54:39.898797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.503 [2024-11-20 17:54:39.930522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.503 [2024-11-20 17:54:39.930579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.503 [2024-11-20 17:54:39.930593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.679 ms 00:24:16.503 [2024-11-20 17:54:39.930600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.503 [2024-11-20 17:54:39.930709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.503 [2024-11-20 17:54:39.930719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:16.504 [2024-11-20 17:54:39.930728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:16.504 [2024-11-20 17:54:39.930735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.504 [2024-11-20 17:54:39.981649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.504 [2024-11-20 17:54:39.981719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.504 [2024-11-20 17:54:39.981736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.846 ms 00:24:16.504 [2024-11-20 17:54:39.981745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.504 [2024-11-20 17:54:39.981819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.504 [2024-11-20 17:54:39.981830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.504 [2024-11-20 17:54:39.981844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:16.504 [2024-11-20 17:54:39.981853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.504 [2024-11-20 17:54:39.982503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.504 [2024-11-20 17:54:39.982556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.504 [2024-11-20 17:54:39.982569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:24:16.504 [2024-11-20 17:54:39.982577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:16.504 [2024-11-20 17:54:39.982767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.504 [2024-11-20 17:54:39.982777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.504 [2024-11-20 17:54:39.982786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:24:16.504 [2024-11-20 17:54:39.982799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.504 [2024-11-20 17:54:39.998456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.504 [2024-11-20 17:54:39.998503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.504 [2024-11-20 17:54:39.998517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.637 ms 00:24:16.504 [2024-11-20 17:54:39.998526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.504 [2024-11-20 17:54:40.012745] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:16.504 [2024-11-20 17:54:40.012966] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:16.504 [2024-11-20 17:54:40.012989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.504 [2024-11-20 17:54:40.012999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:16.504 [2024-11-20 17:54:40.013009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.324 ms 00:24:16.504 [2024-11-20 17:54:40.013017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.039126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.039183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:16.766 [2024-11-20 17:54:40.039196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.055 ms 00:24:16.766 [2024-11-20 17:54:40.039205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.052319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.052520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:16.766 [2024-11-20 17:54:40.052543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.045 ms 00:24:16.766 [2024-11-20 17:54:40.052552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.065770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.065823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:16.766 [2024-11-20 17:54:40.065837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.176 ms 00:24:16.766 [2024-11-20 17:54:40.065846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.066544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.066577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:16.766 [2024-11-20 17:54:40.066589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:24:16.766 [2024-11-20 17:54:40.066600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 
17:54:40.134191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.134523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:16.766 [2024-11-20 17:54:40.134559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.568 ms 00:24:16.766 [2024-11-20 17:54:40.134569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.146410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:16.766 [2024-11-20 17:54:40.149951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.149995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:16.766 [2024-11-20 17:54:40.150009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.322 ms 00:24:16.766 [2024-11-20 17:54:40.150018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.150134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.150147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:16.766 [2024-11-20 17:54:40.150158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:16.766 [2024-11-20 17:54:40.150169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.151963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.152007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:16.766 [2024-11-20 17:54:40.152019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:24:16.766 [2024-11-20 17:54:40.152027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.152062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.152072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:16.766 [2024-11-20 17:54:40.152081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:16.766 [2024-11-20 17:54:40.152089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.152138] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:16.766 [2024-11-20 17:54:40.152149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.152158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:16.766 [2024-11-20 17:54:40.152166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:16.766 [2024-11-20 17:54:40.152175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.178936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.178998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:16.766 [2024-11-20 17:54:40.179014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.740 ms 00:24:16.766 [2024-11-20 17:54:40.179029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.179129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.766 [2024-11-20 17:54:40.179141] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:16.766 [2024-11-20 17:54:40.179150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:16.766 [2024-11-20 17:54:40.179159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.766 [2024-11-20 17:54:40.180456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.541 ms, result 0 00:24:18.153  [2024-11-20T17:54:42.637Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-20T17:54:43.581Z] Copying: 34/1024 [MB] (19 MBps) [2024-11-20T17:54:44.526Z] Copying: 52/1024 [MB] (17 MBps) [2024-11-20T17:54:45.471Z] Copying: 63/1024 [MB] (11 MBps) [2024-11-20T17:54:46.410Z] Copying: 99/1024 [MB] (35 MBps) [2024-11-20T17:54:47.793Z] Copying: 127/1024 [MB] (28 MBps) [2024-11-20T17:54:48.736Z] Copying: 162/1024 [MB] (34 MBps) [2024-11-20T17:54:49.681Z] Copying: 195/1024 [MB] (33 MBps) [2024-11-20T17:54:50.624Z] Copying: 231/1024 [MB] (36 MBps) [2024-11-20T17:54:51.569Z] Copying: 266/1024 [MB] (34 MBps) [2024-11-20T17:54:52.515Z] Copying: 301/1024 [MB] (35 MBps) [2024-11-20T17:54:53.460Z] Copying: 332/1024 [MB] (30 MBps) [2024-11-20T17:54:54.404Z] Copying: 352/1024 [MB] (20 MBps) [2024-11-20T17:54:55.850Z] Copying: 384/1024 [MB] (32 MBps) [2024-11-20T17:54:56.424Z] Copying: 419/1024 [MB] (34 MBps) [2024-11-20T17:54:57.813Z] Copying: 433/1024 [MB] (14 MBps) [2024-11-20T17:54:58.380Z] Copying: 445/1024 [MB] (11 MBps) [2024-11-20T17:54:59.752Z] Copying: 463/1024 [MB] (17 MBps) [2024-11-20T17:55:00.685Z] Copying: 492/1024 [MB] (28 MBps) [2024-11-20T17:55:01.624Z] Copying: 522/1024 [MB] (30 MBps) [2024-11-20T17:55:02.565Z] Copying: 545/1024 [MB] (22 MBps) [2024-11-20T17:55:03.499Z] Copying: 559/1024 [MB] (14 MBps) [2024-11-20T17:55:04.437Z] Copying: 579/1024 [MB] (19 MBps) [2024-11-20T17:55:05.808Z] Copying: 607/1024 [MB] (28 MBps) [2024-11-20T17:55:06.376Z] Copying: 631/1024 [MB] (23 MBps) [2024-11-20T17:55:07.747Z] Copying: 679/1024 [MB] (47 MBps) [2024-11-20T17:55:08.682Z] Copying: 728/1024 [MB] (49 MBps) [2024-11-20T17:55:09.729Z] Copying: 776/1024 [MB] (48 MBps) [2024-11-20T17:55:10.696Z] Copying: 815/1024 [MB] (38 MBps) [2024-11-20T17:55:11.634Z] Copying: 840/1024 [MB] (25 MBps) [2024-11-20T17:55:12.568Z] Copying: 865/1024 [MB] (24 MBps) [2024-11-20T17:55:13.502Z] Copying: 882/1024 [MB] (17 MBps) [2024-11-20T17:55:14.441Z] Copying: 916/1024 [MB] (34 MBps) [2024-11-20T17:55:15.385Z] Copying: 943/1024 [MB] (26 MBps) [2024-11-20T17:55:16.769Z] Copying: 962/1024 [MB] (18 MBps) [2024-11-20T17:55:17.709Z] Copying: 976/1024 [MB] (14 MBps) [2024-11-20T17:55:18.648Z] Copying: 996/1024 [MB] (20 MBps) [2024-11-20T17:55:18.908Z] Copying: 1017/1024 [MB] (21 MBps) [2024-11-20T17:55:19.169Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 17:55:18.995960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.629 [2024-11-20 17:55:18.996025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.630 [2024-11-20 17:55:18.996041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:55.630 [2024-11-20 17:55:18.996055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.630 [2024-11-20 17:55:18.996081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.630 [2024-11-20 17:55:18.999757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.630 
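The "(average 26 MBps)" summary at the end of the copy progress above is consistent with the per-sample timestamps: roughly 1024 MB moved between 17:54:40 and 17:55:19, about 39 seconds of wall time. A back-of-envelope check with values read off this log:

    # ~1024 MB across the ~39 s of progress samples above -> ~26 MBps average.
    awk 'BEGIN { printf "%.1f MBps\n", 1024 / 39 }'   # prints: 26.3 MBps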
[2024-11-20 17:55:18.999890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:55.630 [2024-11-20 17:55:18.999957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:24:55.630 [2024-11-20 17:55:18.999984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.630 [2024-11-20 17:55:19.000262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.630 [2024-11-20 17:55:19.000292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.630 [2024-11-20 17:55:19.000315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:24:55.630 [2024-11-20 17:55:19.000378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.630 [2024-11-20 17:55:19.006668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.630 [2024-11-20 17:55:19.006824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.630 [2024-11-20 17:55:19.006899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:24:55.630 [2024-11-20 17:55:19.006924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.630 [2024-11-20 17:55:19.013134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.630 [2024-11-20 17:55:19.013245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:55.630 [2024-11-20 17:55:19.013297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:24:55.630 [2024-11-20 17:55:19.013320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.630 [2024-11-20 17:55:19.038659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.630 [2024-11-20 17:55:19.038815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:55.630 [2024-11-20 17:55:19.038833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.282 ms 00:24:55.630 [2024-11-20 17:55:19.038842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.630 [2024-11-20 17:55:19.054157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.630 [2024-11-20 17:55:19.054217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.630 [2024-11-20 17:55:19.054232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.863 ms 00:24:55.630 [2024-11-20 17:55:19.054241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.889 [2024-11-20 17:55:19.333225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.889 [2024-11-20 17:55:19.333285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:55.889 [2024-11-20 17:55:19.333299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 278.931 ms 00:24:55.889 [2024-11-20 17:55:19.333308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.889 [2024-11-20 17:55:19.358974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.889 [2024-11-20 17:55:19.359022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:55.889 [2024-11-20 17:55:19.359034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.648 ms 00:24:55.889 [2024-11-20 17:55:19.359043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.889 [2024-11-20 17:55:19.384011] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.889 [2024-11-20 17:55:19.384192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:55.890 [2024-11-20 17:55:19.384225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.925 ms 00:24:55.890 [2024-11-20 17:55:19.384234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.890 [2024-11-20 17:55:19.408799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.890 [2024-11-20 17:55:19.408855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:55.890 [2024-11-20 17:55:19.408867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.454 ms 00:24:55.890 [2024-11-20 17:55:19.408899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.152 [2024-11-20 17:55:19.433339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.152 [2024-11-20 17:55:19.433382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:56.152 [2024-11-20 17:55:19.433395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.372 ms 00:24:56.152 [2024-11-20 17:55:19.433402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.152 [2024-11-20 17:55:19.433444] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:56.152 [2024-11-20 17:55:19.433459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:56.152 [2024-11-20 17:55:19.433470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:24:56.152 [2024-11-20 17:55:19.433589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:56.152 [2024-11-20 17:55:19.433770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.433996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434199] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:56.153 [2024-11-20 17:55:19.434306] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:56.153 [2024-11-20 17:55:19.434314] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 582e7997-38e0-40d2-a69e-d470f323bfc0 00:24:56.153 [2024-11-20 17:55:19.434323] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:56.153 [2024-11-20 17:55:19.434331] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30144 00:24:56.153 [2024-11-20 17:55:19.434339] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29184 00:24:56.153 [2024-11-20 17:55:19.434348] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0329 00:24:56.153 [2024-11-20 17:55:19.434356] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:56.153 [2024-11-20 17:55:19.434369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:56.153 [2024-11-20 17:55:19.434378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:56.153 [2024-11-20 17:55:19.434391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:56.153 [2024-11-20 17:55:19.434398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:56.153 [2024-11-20 17:55:19.434405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.153 [2024-11-20 17:55:19.434414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:56.153 [2024-11-20 17:55:19.434423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:24:56.153 [2024-11-20 17:55:19.434432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.153 [2024-11-20 17:55:19.447997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.153 [2024-11-20 17:55:19.448037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:56.153 [2024-11-20 17:55:19.448048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.545 ms 00:24:56.153 [2024-11-20 17:55:19.448063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.153 [2024-11-20 17:55:19.448456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.153 [2024-11-20 17:55:19.448466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:56.153 [2024-11-20 17:55:19.448476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:24:56.153 [2024-11-20 17:55:19.448482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.153 [2024-11-20 17:55:19.484711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.153 [2024-11-20 17:55:19.484763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.153 [2024-11-20 17:55:19.484775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.153 [2024-11-20 17:55:19.484782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.153 [2024-11-20 17:55:19.484852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.153 [2024-11-20 17:55:19.484861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.153 [2024-11-20 17:55:19.484892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.153 [2024-11-20 17:55:19.484902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.153 [2024-11-20 17:55:19.484969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.153 [2024-11-20 17:55:19.484980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.154 [2024-11-20 17:55:19.484994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.485002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.485018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.485026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.154 [2024-11-20 17:55:19.485035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.485043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.568682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.568739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.154 [2024-11-20 17:55:19.568758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.568767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.637705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.637761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.154 [2024-11-20 17:55:19.637774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.637783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.637866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.637905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.154 [2024-11-20 
17:55:19.637915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.637927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.637966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.637975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.154 [2024-11-20 17:55:19.637984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.637992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.638088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.638099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.154 [2024-11-20 17:55:19.638108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.638116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.638153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.638163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:56.154 [2024-11-20 17:55:19.638171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.638180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.638222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.638232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.154 [2024-11-20 17:55:19.638241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.638249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.638302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.154 [2024-11-20 17:55:19.638314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.154 [2024-11-20 17:55:19.638322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.154 [2024-11-20 17:55:19.638331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.154 [2024-11-20 17:55:19.638463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 642.466 ms, result 0 00:24:57.098 00:24:57.098 00:24:57.098 17:55:20 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:59.647 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77360 00:24:59.647 17:55:22 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77360 ']' 
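killprocess, traced above and continued below, probes pid 77360 with kill -0: signal 0 is never delivered, so the call only reports via its exit status whether the process still exists. Because the FTL app has already exited, the probe fails and the helper merely logs that the pid is gone. The underlying shell idiom, as a sketch rather than the exact autotest helper:

    # kill -0 sends no signal; it only tests whether $pid still exists.
    pid=77360
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                                   # still running: terminate it
    else
        echo "Process with pid $pid is not found"     # matches the message below
    fi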
00:24:59.647 17:55:22 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77360 00:24:59.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77360) - No such process 00:24:59.647 Process with pid 77360 is not found 00:24:59.647 Remove shared memory files 00:24:59.647 17:55:22 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77360 is not found' 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:59.647 17:55:22 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:59.647 ************************************ 00:24:59.647 END TEST ftl_restore 00:24:59.647 ************************************ 00:24:59.647 00:24:59.647 real 3m54.654s 00:24:59.647 user 3m43.678s 00:24:59.647 sys 0m11.421s 00:24:59.647 17:55:22 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.647 17:55:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:59.647 17:55:22 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:59.647 17:55:22 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:59.647 17:55:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.647 17:55:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:59.647 ************************************ 00:24:59.647 START TEST ftl_dirty_shutdown 00:24:59.647 ************************************ 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:59.647 * Looking for test storage... 
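The lcov version probe that follows ("lt 1.15 2" via cmp_versions in scripts/common.sh) splits both version strings on ".", "-" and ":" into arrays and compares them field by field, treating missing fields as zero. A condensed sketch of the logic traced below, not the full helper:

    # Split '1.15' and '2' on . - : and compare per field; missing fields count as 0.
    IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15)
    IFS=.-: read -ra ver2 <<< "2"      # ver2=(2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo "1.15 is newer"; break; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo "1.15 is older"; break; }
    done                                # prints: 1.15 is older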
00:24:59.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:59.647 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.648 --rc genhtml_branch_coverage=1 00:24:59.648 --rc genhtml_function_coverage=1 00:24:59.648 --rc genhtml_legend=1 00:24:59.648 --rc geninfo_all_blocks=1 00:24:59.648 --rc geninfo_unexecuted_blocks=1 00:24:59.648 00:24:59.648 ' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.648 --rc genhtml_branch_coverage=1 00:24:59.648 --rc genhtml_function_coverage=1 00:24:59.648 --rc genhtml_legend=1 00:24:59.648 --rc geninfo_all_blocks=1 00:24:59.648 --rc geninfo_unexecuted_blocks=1 00:24:59.648 00:24:59.648 ' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.648 --rc genhtml_branch_coverage=1 00:24:59.648 --rc genhtml_function_coverage=1 00:24:59.648 --rc genhtml_legend=1 00:24:59.648 --rc geninfo_all_blocks=1 00:24:59.648 --rc geninfo_unexecuted_blocks=1 00:24:59.648 00:24:59.648 ' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:59.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.648 --rc genhtml_branch_coverage=1 00:24:59.648 --rc genhtml_function_coverage=1 00:24:59.648 --rc genhtml_legend=1 00:24:59.648 --rc geninfo_all_blocks=1 00:24:59.648 --rc geninfo_unexecuted_blocks=1 00:24:59.648 00:24:59.648 ' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:59.648 17:55:22 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79891 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79891 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79891 ']' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.648 17:55:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:59.648 [2024-11-20 17:55:23.069074] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
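[Annotation] The lines above show dirty_shutdown.sh parsing its -c/-u options, fixing the test geometry (block_size=4096, chunk_size=262144, data_size=262144 blocks), then launching spdk_tgt pinned to core 0 and waiting on its RPC socket. A minimal sketch of that launch-and-wait pattern, using the paths from this run; the polling loop is a simplified stand-in for waitforlisten, which additionally watches /proc/<pid>:

  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$spdk_tgt_bin" -m 0x1 &                  # -m 0x1: single reactor on core 0
  svcpid=$!
  trap 'kill "$svcpid"; exit 1' SIGINT SIGTERM EXIT

  # Poll the UNIX-domain RPC socket until the target answers (simplified
  # waitforlisten; retry count and sleep interval are illustrative).
  for ((i = 0; i < 100; i++)); do
      "$rpc_py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done
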
00:24:59.648 [2024-11-20 17:55:23.069369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79891 ] 00:24:59.909 [2024-11-20 17:55:23.227193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.909 [2024-11-20 17:55:23.337393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.482 17:55:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.482 17:55:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:00.482 17:55:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:00.482 17:55:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:00.482 17:55:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:00.482 17:55:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:00.482 17:55:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:00.483 17:55:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:00.743 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:01.004 { 00:25:01.004 "name": "nvme0n1", 00:25:01.004 "aliases": [ 00:25:01.004 "2036a294-9daa-45f6-b9e5-d41dc1e103c2" 00:25:01.004 ], 00:25:01.004 "product_name": "NVMe disk", 00:25:01.004 "block_size": 4096, 00:25:01.004 "num_blocks": 1310720, 00:25:01.004 "uuid": "2036a294-9daa-45f6-b9e5-d41dc1e103c2", 00:25:01.004 "numa_id": -1, 00:25:01.004 "assigned_rate_limits": { 00:25:01.004 "rw_ios_per_sec": 0, 00:25:01.004 "rw_mbytes_per_sec": 0, 00:25:01.004 "r_mbytes_per_sec": 0, 00:25:01.004 "w_mbytes_per_sec": 0 00:25:01.004 }, 00:25:01.004 "claimed": true, 00:25:01.004 "claim_type": "read_many_write_one", 00:25:01.004 "zoned": false, 00:25:01.004 "supported_io_types": { 00:25:01.004 "read": true, 00:25:01.004 "write": true, 00:25:01.004 "unmap": true, 00:25:01.004 "flush": true, 00:25:01.004 "reset": true, 00:25:01.004 "nvme_admin": true, 00:25:01.004 "nvme_io": true, 00:25:01.004 "nvme_io_md": false, 00:25:01.004 "write_zeroes": true, 00:25:01.004 "zcopy": false, 00:25:01.004 "get_zone_info": false, 00:25:01.004 "zone_management": false, 00:25:01.004 "zone_append": false, 00:25:01.004 "compare": true, 00:25:01.004 "compare_and_write": false, 00:25:01.004 "abort": true, 00:25:01.004 "seek_hole": false, 00:25:01.004 "seek_data": false, 00:25:01.004 
"copy": true, 00:25:01.004 "nvme_iov_md": false 00:25:01.004 }, 00:25:01.004 "driver_specific": { 00:25:01.004 "nvme": [ 00:25:01.004 { 00:25:01.004 "pci_address": "0000:00:11.0", 00:25:01.004 "trid": { 00:25:01.004 "trtype": "PCIe", 00:25:01.004 "traddr": "0000:00:11.0" 00:25:01.004 }, 00:25:01.004 "ctrlr_data": { 00:25:01.004 "cntlid": 0, 00:25:01.004 "vendor_id": "0x1b36", 00:25:01.004 "model_number": "QEMU NVMe Ctrl", 00:25:01.004 "serial_number": "12341", 00:25:01.004 "firmware_revision": "8.0.0", 00:25:01.004 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:01.004 "oacs": { 00:25:01.004 "security": 0, 00:25:01.004 "format": 1, 00:25:01.004 "firmware": 0, 00:25:01.004 "ns_manage": 1 00:25:01.004 }, 00:25:01.004 "multi_ctrlr": false, 00:25:01.004 "ana_reporting": false 00:25:01.004 }, 00:25:01.004 "vs": { 00:25:01.004 "nvme_version": "1.4" 00:25:01.004 }, 00:25:01.004 "ns_data": { 00:25:01.004 "id": 1, 00:25:01.004 "can_share": false 00:25:01.004 } 00:25:01.004 } 00:25:01.004 ], 00:25:01.004 "mp_policy": "active_passive" 00:25:01.004 } 00:25:01.004 } 00:25:01.004 ]' 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:01.004 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:01.264 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b5620548-bf19-40fc-a0ae-a2bfbab7082f 00:25:01.264 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:01.264 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5620548-bf19-40fc-a0ae-a2bfbab7082f 00:25:01.525 17:55:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:01.786 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f2480f98-15db-4616-aeff-a420225768db 00:25:01.786 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f2480f98-15db-4616-aeff-a420225768db 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:02.047 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.309 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:02.309 { 00:25:02.309 "name": "d1199de8-66e2-48fa-8fc2-d5bf5323fced", 00:25:02.309 "aliases": [ 00:25:02.309 "lvs/nvme0n1p0" 00:25:02.309 ], 00:25:02.309 "product_name": "Logical Volume", 00:25:02.309 "block_size": 4096, 00:25:02.309 "num_blocks": 26476544, 00:25:02.309 "uuid": "d1199de8-66e2-48fa-8fc2-d5bf5323fced", 00:25:02.309 "assigned_rate_limits": { 00:25:02.309 "rw_ios_per_sec": 0, 00:25:02.309 "rw_mbytes_per_sec": 0, 00:25:02.309 "r_mbytes_per_sec": 0, 00:25:02.309 "w_mbytes_per_sec": 0 00:25:02.309 }, 00:25:02.309 "claimed": false, 00:25:02.309 "zoned": false, 00:25:02.309 "supported_io_types": { 00:25:02.309 "read": true, 00:25:02.309 "write": true, 00:25:02.309 "unmap": true, 00:25:02.309 "flush": false, 00:25:02.309 "reset": true, 00:25:02.309 "nvme_admin": false, 00:25:02.309 "nvme_io": false, 00:25:02.309 "nvme_io_md": false, 00:25:02.309 "write_zeroes": true, 00:25:02.309 "zcopy": false, 00:25:02.309 "get_zone_info": false, 00:25:02.309 "zone_management": false, 00:25:02.309 "zone_append": false, 00:25:02.309 "compare": false, 00:25:02.309 "compare_and_write": false, 00:25:02.309 "abort": false, 00:25:02.309 "seek_hole": true, 00:25:02.309 "seek_data": true, 00:25:02.309 "copy": false, 00:25:02.309 "nvme_iov_md": false 00:25:02.309 }, 00:25:02.309 "driver_specific": { 00:25:02.309 "lvol": { 00:25:02.309 "lvol_store_uuid": "f2480f98-15db-4616-aeff-a420225768db", 00:25:02.309 "base_bdev": "nvme0n1", 00:25:02.309 "thin_provision": true, 00:25:02.309 "num_allocated_clusters": 0, 00:25:02.309 "snapshot": false, 00:25:02.309 "clone": false, 00:25:02.309 "esnap_clone": false 00:25:02.309 } 00:25:02.309 } 00:25:02.309 } 00:25:02.309 ]' 00:25:02.309 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:02.309 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:02.310 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:02.310 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:02.310 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:02.310 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:02.310 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:02.310 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:02.310 17:55:25 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:02.571 17:55:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.571 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:02.571 { 00:25:02.571 "name": "d1199de8-66e2-48fa-8fc2-d5bf5323fced", 00:25:02.571 "aliases": [ 00:25:02.571 "lvs/nvme0n1p0" 00:25:02.571 ], 00:25:02.571 "product_name": "Logical Volume", 00:25:02.571 "block_size": 4096, 00:25:02.571 "num_blocks": 26476544, 00:25:02.571 "uuid": "d1199de8-66e2-48fa-8fc2-d5bf5323fced", 00:25:02.571 "assigned_rate_limits": { 00:25:02.571 "rw_ios_per_sec": 0, 00:25:02.571 "rw_mbytes_per_sec": 0, 00:25:02.571 "r_mbytes_per_sec": 0, 00:25:02.571 "w_mbytes_per_sec": 0 00:25:02.571 }, 00:25:02.571 "claimed": false, 00:25:02.571 "zoned": false, 00:25:02.571 "supported_io_types": { 00:25:02.571 "read": true, 00:25:02.571 "write": true, 00:25:02.571 "unmap": true, 00:25:02.571 "flush": false, 00:25:02.571 "reset": true, 00:25:02.571 "nvme_admin": false, 00:25:02.571 "nvme_io": false, 00:25:02.571 "nvme_io_md": false, 00:25:02.571 "write_zeroes": true, 00:25:02.571 "zcopy": false, 00:25:02.571 "get_zone_info": false, 00:25:02.571 "zone_management": false, 00:25:02.571 "zone_append": false, 00:25:02.571 "compare": false, 00:25:02.571 "compare_and_write": false, 00:25:02.571 "abort": false, 00:25:02.571 "seek_hole": true, 00:25:02.571 "seek_data": true, 00:25:02.571 "copy": false, 00:25:02.571 "nvme_iov_md": false 00:25:02.571 }, 00:25:02.571 "driver_specific": { 00:25:02.571 "lvol": { 00:25:02.571 "lvol_store_uuid": "f2480f98-15db-4616-aeff-a420225768db", 00:25:02.571 "base_bdev": "nvme0n1", 00:25:02.571 "thin_provision": true, 00:25:02.571 "num_allocated_clusters": 0, 00:25:02.571 "snapshot": false, 00:25:02.571 "clone": false, 00:25:02.571 "esnap_clone": false 00:25:02.571 } 00:25:02.571 } 00:25:02.571 } 00:25:02.571 ]' 00:25:02.571 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:02.571 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:02.571 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:02.832 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d1199de8-66e2-48fa-8fc2-d5bf5323fced 00:25:03.093 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:03.093 { 00:25:03.093 "name": "d1199de8-66e2-48fa-8fc2-d5bf5323fced", 00:25:03.093 "aliases": [ 00:25:03.093 "lvs/nvme0n1p0" 00:25:03.093 ], 00:25:03.093 "product_name": "Logical Volume", 00:25:03.093 "block_size": 4096, 00:25:03.093 "num_blocks": 26476544, 00:25:03.093 "uuid": "d1199de8-66e2-48fa-8fc2-d5bf5323fced", 00:25:03.093 "assigned_rate_limits": { 00:25:03.093 "rw_ios_per_sec": 0, 00:25:03.093 "rw_mbytes_per_sec": 0, 00:25:03.093 "r_mbytes_per_sec": 0, 00:25:03.093 "w_mbytes_per_sec": 0 00:25:03.093 }, 00:25:03.093 "claimed": false, 00:25:03.093 "zoned": false, 00:25:03.093 "supported_io_types": { 00:25:03.093 "read": true, 00:25:03.093 "write": true, 00:25:03.093 "unmap": true, 00:25:03.093 "flush": false, 00:25:03.093 "reset": true, 00:25:03.093 "nvme_admin": false, 00:25:03.093 "nvme_io": false, 00:25:03.093 "nvme_io_md": false, 00:25:03.093 "write_zeroes": true, 00:25:03.093 "zcopy": false, 00:25:03.093 "get_zone_info": false, 00:25:03.093 "zone_management": false, 00:25:03.093 "zone_append": false, 00:25:03.093 "compare": false, 00:25:03.093 "compare_and_write": false, 00:25:03.093 "abort": false, 00:25:03.093 "seek_hole": true, 00:25:03.093 "seek_data": true, 00:25:03.093 "copy": false, 00:25:03.093 "nvme_iov_md": false 00:25:03.093 }, 00:25:03.093 "driver_specific": { 00:25:03.093 "lvol": { 00:25:03.093 "lvol_store_uuid": "f2480f98-15db-4616-aeff-a420225768db", 00:25:03.093 "base_bdev": "nvme0n1", 00:25:03.093 "thin_provision": true, 00:25:03.093 "num_allocated_clusters": 0, 00:25:03.093 "snapshot": false, 00:25:03.093 "clone": false, 00:25:03.093 "esnap_clone": false 00:25:03.093 } 00:25:03.093 } 00:25:03.093 } 00:25:03.093 ]' 00:25:03.093 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d1199de8-66e2-48fa-8fc2-d5bf5323fced 
--l2p_dram_limit 10' 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:03.094 17:55:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d1199de8-66e2-48fa-8fc2-d5bf5323fced --l2p_dram_limit 10 -c nvc0n1p0 00:25:03.355 [2024-11-20 17:55:26.745052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.355 [2024-11-20 17:55:26.745093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:03.355 [2024-11-20 17:55:26.745105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:03.355 [2024-11-20 17:55:26.745112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.355 [2024-11-20 17:55:26.745161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.745169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.356 [2024-11-20 17:55:26.745177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:03.356 [2024-11-20 17:55:26.745183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.745199] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:03.356 [2024-11-20 17:55:26.745805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:03.356 [2024-11-20 17:55:26.745820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.745826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.356 [2024-11-20 17:55:26.745835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:25:03.356 [2024-11-20 17:55:26.745840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.745867] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9d9b3bbe-d8c2-4135-b83e-ca00f5a592b5 00:25:03.356 [2024-11-20 17:55:26.746880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.746902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:03.356 [2024-11-20 17:55:26.746910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:03.356 [2024-11-20 17:55:26.746918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.751652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.751684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.356 [2024-11-20 17:55:26.751692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:25:03.356 [2024-11-20 17:55:26.751700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.751767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.751776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.356 [2024-11-20 17:55:26.751783] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:03.356 [2024-11-20 17:55:26.751793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.751821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.751829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:03.356 [2024-11-20 17:55:26.751836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:03.356 [2024-11-20 17:55:26.751844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.751860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:03.356 [2024-11-20 17:55:26.754721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.754749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.356 [2024-11-20 17:55:26.754765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:25:03.356 [2024-11-20 17:55:26.754772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.754798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.754804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:03.356 [2024-11-20 17:55:26.754811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:03.356 [2024-11-20 17:55:26.754817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.754843] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:03.356 [2024-11-20 17:55:26.754955] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:03.356 [2024-11-20 17:55:26.754968] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:03.356 [2024-11-20 17:55:26.754976] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:03.356 [2024-11-20 17:55:26.754985] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:03.356 [2024-11-20 17:55:26.754992] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:03.356 [2024-11-20 17:55:26.754999] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:03.356 [2024-11-20 17:55:26.755005] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:03.356 [2024-11-20 17:55:26.755014] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:03.356 [2024-11-20 17:55:26.755019] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:03.356 [2024-11-20 17:55:26.755026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.755032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:03.356 [2024-11-20 17:55:26.755039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:25:03.356 [2024-11-20 17:55:26.755050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.755115] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.356 [2024-11-20 17:55:26.755121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:03.356 [2024-11-20 17:55:26.755128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:03.356 [2024-11-20 17:55:26.755133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.356 [2024-11-20 17:55:26.755211] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:03.356 [2024-11-20 17:55:26.755218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:03.356 [2024-11-20 17:55:26.755225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:03.356 [2024-11-20 17:55:26.755243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:03.356 [2024-11-20 17:55:26.755261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.356 [2024-11-20 17:55:26.755273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:03.356 [2024-11-20 17:55:26.755278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:03.356 [2024-11-20 17:55:26.755284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.356 [2024-11-20 17:55:26.755291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:03.356 [2024-11-20 17:55:26.755298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:03.356 [2024-11-20 17:55:26.755304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:03.356 [2024-11-20 17:55:26.755317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:03.356 [2024-11-20 17:55:26.755335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:03.356 [2024-11-20 17:55:26.755351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:03.356 [2024-11-20 17:55:26.755368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755379] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:03.356 [2024-11-20 17:55:26.755384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:03.356 [2024-11-20 17:55:26.755403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.356 [2024-11-20 17:55:26.755414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:03.356 [2024-11-20 17:55:26.755419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:03.356 [2024-11-20 17:55:26.755425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.356 [2024-11-20 17:55:26.755430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:03.356 [2024-11-20 17:55:26.755436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:03.356 [2024-11-20 17:55:26.755441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:03.356 [2024-11-20 17:55:26.755452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:03.356 [2024-11-20 17:55:26.755458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755463] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:03.356 [2024-11-20 17:55:26.755471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:03.356 [2024-11-20 17:55:26.755477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.356 [2024-11-20 17:55:26.755483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.356 [2024-11-20 17:55:26.755490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:03.356 [2024-11-20 17:55:26.755498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:03.356 [2024-11-20 17:55:26.755503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:03.356 [2024-11-20 17:55:26.755509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:03.357 [2024-11-20 17:55:26.755514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:03.357 [2024-11-20 17:55:26.755521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:03.357 [2024-11-20 17:55:26.755528] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:03.357 [2024-11-20 17:55:26.755536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.357 [2024-11-20 17:55:26.755544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:03.357 [2024-11-20 17:55:26.755550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:03.357 [2024-11-20 17:55:26.755555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:03.357 [2024-11-20 17:55:26.755562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:03.357 [2024-11-20 17:55:26.755567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:03.357 [2024-11-20 17:55:26.755573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:03.357 [2024-11-20 17:55:26.755578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:03.357 [2024-11-20 17:55:26.755584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:03.357 [2024-11-20 17:55:26.755590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:03.357 [2024-11-20 17:55:26.755598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:03.357 [2024-11-20 17:55:26.755603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:03.357 [2024-11-20 17:55:26.755610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:03.357 [2024-11-20 17:55:26.755616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:03.357 [2024-11-20 17:55:26.755623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:03.357 [2024-11-20 17:55:26.755628] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:03.357 [2024-11-20 17:55:26.755635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.357 [2024-11-20 17:55:26.755641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:03.357 [2024-11-20 17:55:26.755647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:03.357 [2024-11-20 17:55:26.755653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:03.357 [2024-11-20 17:55:26.755660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:03.357 [2024-11-20 17:55:26.755666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.357 [2024-11-20 17:55:26.755673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:03.357 [2024-11-20 17:55:26.755679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:25:03.357 [2024-11-20 17:55:26.755686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.357 [2024-11-20 17:55:26.755714] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:03.357 [2024-11-20 17:55:26.755723] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:07.563 [2024-11-20 17:55:30.345500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.563 [2024-11-20 17:55:30.345780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:07.563 [2024-11-20 17:55:30.345905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3589.773 ms 00:25:07.563 [2024-11-20 17:55:30.345940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.563 [2024-11-20 17:55:30.375788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.563 [2024-11-20 17:55:30.376028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:07.563 [2024-11-20 17:55:30.376111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.543 ms 00:25:07.563 [2024-11-20 17:55:30.376140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.563 [2024-11-20 17:55:30.376295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.563 [2024-11-20 17:55:30.376378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:07.563 [2024-11-20 17:55:30.376404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:07.563 [2024-11-20 17:55:30.376432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.563 [2024-11-20 17:55:30.410660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.563 [2024-11-20 17:55:30.410867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:07.563 [2024-11-20 17:55:30.410962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.090 ms 00:25:07.563 [2024-11-20 17:55:30.410990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.411049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.411078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:07.564 [2024-11-20 17:55:30.411099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:07.564 [2024-11-20 17:55:30.411279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.411862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.412059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:07.564 [2024-11-20 17:55:30.412116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:25:07.564 [2024-11-20 17:55:30.412142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.412272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.412297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:07.564 [2024-11-20 17:55:30.412364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:07.564 [2024-11-20 17:55:30.412393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.429772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.430031] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:07.564 [2024-11-20 17:55:30.430103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.344 ms 00:25:07.564 [2024-11-20 17:55:30.430118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.455332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:07.564 [2024-11-20 17:55:30.459678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.459859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:07.564 [2024-11-20 17:55:30.460114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.460 ms 00:25:07.564 [2024-11-20 17:55:30.460150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.556478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.556733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:07.564 [2024-11-20 17:55:30.556806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.257 ms 00:25:07.564 [2024-11-20 17:55:30.556831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.557064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.557189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:07.564 [2024-11-20 17:55:30.557231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:25:07.564 [2024-11-20 17:55:30.557252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.584122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.584307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:07.564 [2024-11-20 17:55:30.584377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.794 ms 00:25:07.564 [2024-11-20 17:55:30.584401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.610834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.611027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:07.564 [2024-11-20 17:55:30.611108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.229 ms 00:25:07.564 [2024-11-20 17:55:30.611133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.611796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.611964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:07.564 [2024-11-20 17:55:30.612033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:25:07.564 [2024-11-20 17:55:30.612058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.699195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.699380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:07.564 [2024-11-20 17:55:30.699454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.035 ms 00:25:07.564 [2024-11-20 17:55:30.699466] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.728197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.728249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:07.564 [2024-11-20 17:55:30.728266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.591 ms 00:25:07.564 [2024-11-20 17:55:30.728274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.755477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.755526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:07.564 [2024-11-20 17:55:30.755542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.141 ms 00:25:07.564 [2024-11-20 17:55:30.755550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.783177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.783231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:07.564 [2024-11-20 17:55:30.783248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.564 ms 00:25:07.564 [2024-11-20 17:55:30.783255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.783315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.783326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:07.564 [2024-11-20 17:55:30.783341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:07.564 [2024-11-20 17:55:30.783349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.783462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.564 [2024-11-20 17:55:30.783473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:07.564 [2024-11-20 17:55:30.783488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:07.564 [2024-11-20 17:55:30.783495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.564 [2024-11-20 17:55:30.784699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4039.129 ms, result 0 00:25:07.564 { 00:25:07.564 "name": "ftl0", 00:25:07.564 "uuid": "9d9b3bbe-d8c2-4135-b83e-ca00f5a592b5" 00:25:07.564 } 00:25:07.564 17:55:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:07.564 17:55:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:07.564 17:55:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:07.564 17:55:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:07.564 17:55:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:07.825 /dev/nbd0 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:07.825 1+0 records in 00:25:07.825 1+0 records out 00:25:07.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309681 s, 13.2 MB/s 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:07.825 17:55:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:07.825 [2024-11-20 17:55:31.346336] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:25:07.825 [2024-11-20 17:55:31.346473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80033 ] 00:25:08.086 [2024-11-20 17:55:31.507073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.347 [2024-11-20 17:55:31.624280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.722  [2024-11-20T17:55:34.223Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-20T17:55:35.161Z] Copying: 391/1024 [MB] (197 MBps) [2024-11-20T17:55:36.097Z] Copying: 613/1024 [MB] (221 MBps) [2024-11-20T17:55:36.664Z] Copying: 868/1024 [MB] (255 MBps) [2024-11-20T17:55:37.232Z] Copying: 1024/1024 [MB] (average 222 MBps) 00:25:13.692 00:25:13.692 17:55:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:16.228 17:55:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:16.228 [2024-11-20 17:55:39.200395] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
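[Annotation] A few lines above, waitfornbd gates the data-path test on the NBD export actually serving I/O: /dev/nbd0 must appear in /proc/partitions and answer a single 4 KiB direct read (the "1+0 records in/out" lines) before spdk_dd starts copying. A condensed sketch of that check, mirroring the probes visible in this log; the output path and the back-off sleep are illustrative stand-ins (the real run wrote to test/ftl/nbdtest):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # One direct 4 KiB read proves the block device answers I/O.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [[ $(stat -c %s /tmp/nbdtest) != 0 ]] || return 1   # nonzero size check
      rm -f /tmp/nbdtest
  }
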
00:25:16.228 [2024-11-20 17:55:39.200480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80120 ] 00:25:16.228 [2024-11-20 17:55:39.353764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.228 [2024-11-20 17:55:39.446073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.162  [2024-11-20T17:55:42.077Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-20T17:55:43.010Z] Copying: 41/1024 [MB] (24 MBps) [2024-11-20T17:55:43.944Z] Copying: 70/1024 [MB] (28 MBps) [2024-11-20T17:55:44.878Z] Copying: 100/1024 [MB] (29 MBps) [2024-11-20T17:55:45.812Z] Copying: 131/1024 [MB] (31 MBps) [2024-11-20T17:55:46.752Z] Copying: 167/1024 [MB] (36 MBps) [2024-11-20T17:55:47.845Z] Copying: 201/1024 [MB] (34 MBps) [2024-11-20T17:55:48.790Z] Copying: 233/1024 [MB] (31 MBps) [2024-11-20T17:55:49.731Z] Copying: 257/1024 [MB] (24 MBps) [2024-11-20T17:55:50.674Z] Copying: 285/1024 [MB] (27 MBps) [2024-11-20T17:55:52.062Z] Copying: 303/1024 [MB] (17 MBps) [2024-11-20T17:55:53.007Z] Copying: 326/1024 [MB] (22 MBps) [2024-11-20T17:55:53.951Z] Copying: 348/1024 [MB] (22 MBps) [2024-11-20T17:55:54.894Z] Copying: 377/1024 [MB] (29 MBps) [2024-11-20T17:55:55.837Z] Copying: 397/1024 [MB] (19 MBps) [2024-11-20T17:55:56.780Z] Copying: 422/1024 [MB] (24 MBps) [2024-11-20T17:55:57.724Z] Copying: 447/1024 [MB] (24 MBps) [2024-11-20T17:55:58.667Z] Copying: 478/1024 [MB] (30 MBps) [2024-11-20T17:56:00.050Z] Copying: 509/1024 [MB] (31 MBps) [2024-11-20T17:56:01.013Z] Copying: 529/1024 [MB] (20 MBps) [2024-11-20T17:56:01.956Z] Copying: 550/1024 [MB] (20 MBps) [2024-11-20T17:56:02.898Z] Copying: 581/1024 [MB] (31 MBps) [2024-11-20T17:56:03.841Z] Copying: 607/1024 [MB] (26 MBps) [2024-11-20T17:56:04.783Z] Copying: 630/1024 [MB] (22 MBps) [2024-11-20T17:56:05.727Z] Copying: 660/1024 [MB] (30 MBps) [2024-11-20T17:56:06.670Z] Copying: 678/1024 [MB] (17 MBps) [2024-11-20T17:56:08.057Z] Copying: 707/1024 [MB] (28 MBps) [2024-11-20T17:56:09.002Z] Copying: 731/1024 [MB] (23 MBps) [2024-11-20T17:56:09.946Z] Copying: 755/1024 [MB] (24 MBps) [2024-11-20T17:56:10.887Z] Copying: 786/1024 [MB] (30 MBps) [2024-11-20T17:56:11.830Z] Copying: 809/1024 [MB] (22 MBps) [2024-11-20T17:56:12.773Z] Copying: 834/1024 [MB] (25 MBps) [2024-11-20T17:56:13.718Z] Copying: 853/1024 [MB] (19 MBps) [2024-11-20T17:56:15.105Z] Copying: 880/1024 [MB] (27 MBps) [2024-11-20T17:56:15.678Z] Copying: 907/1024 [MB] (26 MBps) [2024-11-20T17:56:17.066Z] Copying: 929/1024 [MB] (22 MBps) [2024-11-20T17:56:18.022Z] Copying: 952/1024 [MB] (23 MBps) [2024-11-20T17:56:18.964Z] Copying: 980/1024 [MB] (27 MBps) [2024-11-20T17:56:19.537Z] Copying: 1004/1024 [MB] (23 MBps) [2024-11-20T17:56:20.108Z] Copying: 1024/1024 [MB] (average 25 MBps) 00:25:56.568 00:25:56.568 17:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:56.568 17:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:56.828 17:56:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:56.829 [2024-11-20 17:56:20.309815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.309856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:25:56.829 [2024-11-20 17:56:20.309868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:56.829 [2024-11-20 17:56:20.309888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.829 [2024-11-20 17:56:20.309908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.829 [2024-11-20 17:56:20.312090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.312115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:56.829 [2024-11-20 17:56:20.312126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:25:56.829 [2024-11-20 17:56:20.312133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.829 [2024-11-20 17:56:20.314128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.314154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:56.829 [2024-11-20 17:56:20.314163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:25:56.829 [2024-11-20 17:56:20.314170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.829 [2024-11-20 17:56:20.327774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.327803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:56.829 [2024-11-20 17:56:20.327813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.587 ms 00:25:56.829 [2024-11-20 17:56:20.327820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.829 [2024-11-20 17:56:20.332560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.332584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:56.829 [2024-11-20 17:56:20.332594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.712 ms 00:25:56.829 [2024-11-20 17:56:20.332601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.829 [2024-11-20 17:56:20.350793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.350829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:56.829 [2024-11-20 17:56:20.350839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.109 ms 00:25:56.829 [2024-11-20 17:56:20.350845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.829 [2024-11-20 17:56:20.362819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.362954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:56.829 [2024-11-20 17:56:20.362972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.924 ms 00:25:56.829 [2024-11-20 17:56:20.362980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.829 [2024-11-20 17:56:20.363093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.829 [2024-11-20 17:56:20.363101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:56.829 [2024-11-20 17:56:20.363110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:56.829 [2024-11-20 17:56:20.363116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
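The trace above is the graceful half of the test: bdev_ftl_unload walks the FTL shutdown pipeline step by step (each Action / name / duration / status quadruplet is one step), quiescing the core poller and persisting the L2P table, NV cache metadata, valid map, and P2L metadata before the superblock is written. A minimal sketch of the teardown sequence driven by dirty_shutdown.sh at @78-@80 above, assuming the rpc.py default socket and the CI workspace paths shown in this log:

  # Flush buffered writes to the NBD device that fronts ftl0,
  # then detach it and unload the FTL bdev cleanly so all
  # metadata is persisted (the RPC prints 'true' on success).
  sync /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

The 'Set FTL clean state' step recorded below marks that every persist step completed before the device was brought down.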
00:25:57.092 [2024-11-20 17:56:20.381277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-11-20 17:56:20.381302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.092 [2024-11-20 17:56:20.381311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.146 ms 00:25:57.092 [2024-11-20 17:56:20.381317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-11-20 17:56:20.398613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-11-20 17:56:20.398715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.092 [2024-11-20 17:56:20.398731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.266 ms 00:25:57.092 [2024-11-20 17:56:20.398736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-11-20 17:56:20.415814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-11-20 17:56:20.415838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.092 [2024-11-20 17:56:20.415847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.050 ms 00:25:57.092 [2024-11-20 17:56:20.415853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-11-20 17:56:20.433317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.092 [2024-11-20 17:56:20.433407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.092 [2024-11-20 17:56:20.433422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.397 ms 00:25:57.092 [2024-11-20 17:56:20.433427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.092 [2024-11-20 17:56:20.433453] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.092 [2024-11-20 17:56:20.433463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433539] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 
17:56:20.433700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.092 [2024-11-20 17:56:20.433807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
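Each ftl_dev_dump_bands record in this dump is one row of a table: band number, valid blocks out of 261120, cumulative write count, and band state. The dump continues below through Band 100, and every row reads '0 / 261120 wr_cnt: 0 state: free', consistent with the 'user writes: 0' statistic printed after the table. A quick way to condense a dump like this when scanning a saved console log; 'console.log' is a hypothetical capture of this output, not a file the test produces:

  # Tally bands by state; band rows end in "state: <state>".
  grep -o 'state: [a-z]*' console.log | sort | uniq -c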
00:25:57.093 [2024-11-20 17:56:20.433865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.433994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.093 [2024-11-20 17:56:20.434147] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.093 [2024-11-20 17:56:20.434155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d9b3bbe-d8c2-4135-b83e-ca00f5a592b5 00:25:57.093 [2024-11-20 17:56:20.434161] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:57.093 [2024-11-20 17:56:20.434169] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:57.093 [2024-11-20 17:56:20.434174] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:57.093 [2024-11-20 17:56:20.434184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:57.093 [2024-11-20 17:56:20.434189] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.093 [2024-11-20 17:56:20.434196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.093 [2024-11-20 17:56:20.434201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.093 [2024-11-20 17:56:20.434207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.093 [2024-11-20 17:56:20.434212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.093 [2024-11-20 17:56:20.434218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-11-20 17:56:20.434224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.093 [2024-11-20 17:56:20.434232] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:25:57.093 [2024-11-20 17:56:20.434238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.443864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-11-20 17:56:20.443894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:57.093 [2024-11-20 17:56:20.443904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.574 ms 00:25:57.093 [2024-11-20 17:56:20.443909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.444178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.093 [2024-11-20 17:56:20.444185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.093 [2024-11-20 17:56:20.444192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:25:57.093 [2024-11-20 17:56:20.444198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.477615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.093 [2024-11-20 17:56:20.477642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.093 [2024-11-20 17:56:20.477655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.093 [2024-11-20 17:56:20.477660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.477704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.093 [2024-11-20 17:56:20.477710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.093 [2024-11-20 17:56:20.477718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.093 [2024-11-20 17:56:20.477723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.477773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.093 [2024-11-20 17:56:20.477781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.093 [2024-11-20 17:56:20.477789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.093 [2024-11-20 17:56:20.477794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.477810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.093 [2024-11-20 17:56:20.477815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.093 [2024-11-20 17:56:20.477823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.093 [2024-11-20 17:56:20.477828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.537336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.093 [2024-11-20 17:56:20.537370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.093 [2024-11-20 17:56:20.537380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.093 [2024-11-20 17:56:20.537385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.585863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.093 [2024-11-20 17:56:20.585901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:25:57.093 [2024-11-20 17:56:20.585911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.093 [2024-11-20 17:56:20.585917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.585998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.093 [2024-11-20 17:56:20.586006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.093 [2024-11-20 17:56:20.586014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.093 [2024-11-20 17:56:20.586022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.093 [2024-11-20 17:56:20.586060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.094 [2024-11-20 17:56:20.586067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.094 [2024-11-20 17:56:20.586074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.094 [2024-11-20 17:56:20.586080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-20 17:56:20.586147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.094 [2024-11-20 17:56:20.586154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.094 [2024-11-20 17:56:20.586161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.094 [2024-11-20 17:56:20.586169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-20 17:56:20.586194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.094 [2024-11-20 17:56:20.586201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.094 [2024-11-20 17:56:20.586208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.094 [2024-11-20 17:56:20.586214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-20 17:56:20.586242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.094 [2024-11-20 17:56:20.586248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.094 [2024-11-20 17:56:20.586256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.094 [2024-11-20 17:56:20.586262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-20 17:56:20.586300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.094 [2024-11-20 17:56:20.586307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.094 [2024-11-20 17:56:20.586315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.094 [2024-11-20 17:56:20.586320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-20 17:56:20.586425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.578 ms, result 0 00:25:57.094 true 00:25:57.094 17:56:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79891 00:25:57.094 17:56:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79891 00:25:57.094 17:56:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:57.355 [2024-11-20 17:56:20.676622] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:25:57.355 [2024-11-20 17:56:20.676739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80544 ] 00:25:57.355 [2024-11-20 17:56:20.831779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.616 [2024-11-20 17:56:20.911426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.558  [2024-11-20T17:56:23.482Z] Copying: 253/1024 [MB] (253 MBps) [2024-11-20T17:56:24.425Z] Copying: 509/1024 [MB] (255 MBps) [2024-11-20T17:56:25.368Z] Copying: 761/1024 [MB] (252 MBps) [2024-11-20T17:56:25.368Z] Copying: 1016/1024 [MB] (255 MBps) [2024-11-20T17:56:25.940Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:26:02.400 00:26:02.400 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79891 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:02.400 17:56:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:02.400 [2024-11-20 17:56:25.745786] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:26:02.400 [2024-11-20 17:56:25.746017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80603 ] 00:26:02.400 [2024-11-20 17:56:25.893395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.661 [2024-11-20 17:56:25.968403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.661 [2024-11-20 17:56:26.179062] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.661 [2024-11-20 17:56:26.179111] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.922 [2024-11-20 17:56:26.241684] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:02.922 [2024-11-20 17:56:26.241957] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:02.922 [2024-11-20 17:56:26.242159] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:02.922 [2024-11-20 17:56:26.456954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.922 [2024-11-20 17:56:26.457076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:02.922 [2024-11-20 17:56:26.457091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:02.922 [2024-11-20 17:56:26.457097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.922 [2024-11-20 17:56:26.457143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.922 [2024-11-20 17:56:26.457154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.922 [2024-11-20 17:56:26.457161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:02.922 [2024-11-20 17:56:26.457166] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:02.922 [2024-11-20 17:56:26.457182] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:02.922 [2024-11-20 17:56:26.457736] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:02.922 [2024-11-20 17:56:26.457748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.922 [2024-11-20 17:56:26.457754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.922 [2024-11-20 17:56:26.457760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:26:02.922 [2024-11-20 17:56:26.457766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.922 [2024-11-20 17:56:26.458717] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:03.184 [2024-11-20 17:56:26.468476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.468592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:03.184 [2024-11-20 17:56:26.468604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.761 ms 00:26:03.184 [2024-11-20 17:56:26.468610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.468650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.468658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:03.184 [2024-11-20 17:56:26.468664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:03.184 [2024-11-20 17:56:26.468669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.473052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.473077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.184 [2024-11-20 17:56:26.473084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.344 ms 00:26:03.184 [2024-11-20 17:56:26.473090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.473143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.473149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.184 [2024-11-20 17:56:26.473155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:03.184 [2024-11-20 17:56:26.473160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.473198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.473206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:03.184 [2024-11-20 17:56:26.473212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:03.184 [2024-11-20 17:56:26.473217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.473231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:03.184 [2024-11-20 17:56:26.475977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.475998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.184 
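For context on where this second startup trace comes from: after the clean unload, the test killed the original spdk_tgt (pid 79891) with SIGKILL and switched to standalone spdk_dd processes that re-create ftl0 from the saved bdev configuration instead of talking to a target over RPC. A sketch reconstructing the two invocations recorded above at dirty_shutdown.sh @87 and @88 (the SPDK variable is editorial shorthand; all flags are taken verbatim from the trace):

  # Generate 1 GiB of random data (262144 x 4 KiB blocks), then write it
  # through the FTL bdev at an offset of 262144 blocks, loading ftl0 from
  # ftl.json rather than contacting the killed target.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
      --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
  "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
      --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"

Because ftl0 is opened for writes again, this load runs the restore steps traced below (NV cache, valid map, band info, trim, P2L checkpoints, L2P) and then sets the FTL dirty state, which a later clean shutdown would clear.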
[2024-11-20 17:56:26.476005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:26:03.184 [2024-11-20 17:56:26.476011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.476035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.476042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:03.184 [2024-11-20 17:56:26.476048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:03.184 [2024-11-20 17:56:26.476054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.476068] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:03.184 [2024-11-20 17:56:26.476082] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:03.184 [2024-11-20 17:56:26.476109] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:03.184 [2024-11-20 17:56:26.476121] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:03.184 [2024-11-20 17:56:26.476198] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:03.184 [2024-11-20 17:56:26.476206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:03.184 [2024-11-20 17:56:26.476214] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:03.184 [2024-11-20 17:56:26.476222] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:03.184 [2024-11-20 17:56:26.476231] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:03.184 [2024-11-20 17:56:26.476237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:03.184 [2024-11-20 17:56:26.476243] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:03.184 [2024-11-20 17:56:26.476248] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:03.184 [2024-11-20 17:56:26.476253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:03.184 [2024-11-20 17:56:26.476259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.476264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:03.184 [2024-11-20 17:56:26.476270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:26:03.184 [2024-11-20 17:56:26.476275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.476337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.184 [2024-11-20 17:56:26.476345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:03.184 [2024-11-20 17:56:26.476351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:03.184 [2024-11-20 17:56:26.476356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.184 [2024-11-20 17:56:26.476430] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:03.184 [2024-11-20 17:56:26.476438] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:03.184 [2024-11-20 17:56:26.476444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.184 [2024-11-20 17:56:26.476449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.184 [2024-11-20 17:56:26.476455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:03.184 [2024-11-20 17:56:26.476460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:03.184 [2024-11-20 17:56:26.476465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:03.184 [2024-11-20 17:56:26.476471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:03.184 [2024-11-20 17:56:26.476476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:03.184 [2024-11-20 17:56:26.476481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.184 [2024-11-20 17:56:26.476486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:03.184 [2024-11-20 17:56:26.476495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:03.184 [2024-11-20 17:56:26.476500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.184 [2024-11-20 17:56:26.476506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:03.184 [2024-11-20 17:56:26.476511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:03.184 [2024-11-20 17:56:26.476516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.184 [2024-11-20 17:56:26.476521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:03.185 [2024-11-20 17:56:26.476526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:03.185 [2024-11-20 17:56:26.476530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:03.185 [2024-11-20 17:56:26.476541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.185 [2024-11-20 17:56:26.476551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:03.185 [2024-11-20 17:56:26.476556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.185 [2024-11-20 17:56:26.476565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:03.185 [2024-11-20 17:56:26.476570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.185 [2024-11-20 17:56:26.476579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:03.185 [2024-11-20 17:56:26.476584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.185 [2024-11-20 17:56:26.476593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:03.185 [2024-11-20 17:56:26.476598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476603] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.185 [2024-11-20 17:56:26.476608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:03.185 [2024-11-20 17:56:26.476613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:03.185 [2024-11-20 17:56:26.476617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.185 [2024-11-20 17:56:26.476622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:03.185 [2024-11-20 17:56:26.476627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:03.185 [2024-11-20 17:56:26.476632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:03.185 [2024-11-20 17:56:26.476641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:03.185 [2024-11-20 17:56:26.476646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476651] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:03.185 [2024-11-20 17:56:26.476656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:03.185 [2024-11-20 17:56:26.476663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.185 [2024-11-20 17:56:26.476671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.185 [2024-11-20 17:56:26.476676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:03.185 [2024-11-20 17:56:26.476681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:03.185 [2024-11-20 17:56:26.476686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:03.185 [2024-11-20 17:56:26.476691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:03.185 [2024-11-20 17:56:26.476696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:03.185 [2024-11-20 17:56:26.476700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:03.185 [2024-11-20 17:56:26.476707] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:03.185 [2024-11-20 17:56:26.476714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.185 [2024-11-20 17:56:26.476720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:03.185 [2024-11-20 17:56:26.476725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:03.185 [2024-11-20 17:56:26.476731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:03.185 [2024-11-20 17:56:26.476736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:03.185 [2024-11-20 17:56:26.476741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:03.185 [2024-11-20 17:56:26.476746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc 
ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:03.185 [2024-11-20 17:56:26.476752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:03.185 [2024-11-20 17:56:26.476757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:03.185 [2024-11-20 17:56:26.476762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:03.185 [2024-11-20 17:56:26.476767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:03.185 [2024-11-20 17:56:26.476772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:03.185 [2024-11-20 17:56:26.476777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:03.185 [2024-11-20 17:56:26.476782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:03.185 [2024-11-20 17:56:26.476788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:03.185 [2024-11-20 17:56:26.476793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:03.185 [2024-11-20 17:56:26.476799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.185 [2024-11-20 17:56:26.476805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:03.185 [2024-11-20 17:56:26.476810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:03.185 [2024-11-20 17:56:26.476815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:03.185 [2024-11-20 17:56:26.476821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:03.185 [2024-11-20 17:56:26.476826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.476831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:03.185 [2024-11-20 17:56:26.476840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:26:03.185 [2024-11-20 17:56:26.476846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.497867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.497907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.185 [2024-11-20 17:56:26.497915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.972 ms 00:26:03.185 [2024-11-20 17:56:26.497921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.497987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.497996] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:03.185 [2024-11-20 17:56:26.498002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:03.185 [2024-11-20 17:56:26.498008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.533828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.533862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.185 [2024-11-20 17:56:26.533887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.778 ms 00:26:03.185 [2024-11-20 17:56:26.533894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.533935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.533942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.185 [2024-11-20 17:56:26.533950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:03.185 [2024-11-20 17:56:26.533956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.534287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.534312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.185 [2024-11-20 17:56:26.534320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:26:03.185 [2024-11-20 17:56:26.534327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.534433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.534447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.185 [2024-11-20 17:56:26.534454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:03.185 [2024-11-20 17:56:26.534460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.545246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.545272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.185 [2024-11-20 17:56:26.545280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.770 ms 00:26:03.185 [2024-11-20 17:56:26.545286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.555180] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:03.185 [2024-11-20 17:56:26.555293] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:03.185 [2024-11-20 17:56:26.555306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.555313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:03.185 [2024-11-20 17:56:26.555319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.945 ms 00:26:03.185 [2024-11-20 17:56:26.555325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.574206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.185 [2024-11-20 17:56:26.574320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:03.185 [2024-11-20 
17:56:26.574343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.852 ms 00:26:03.185 [2024-11-20 17:56:26.574350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.185 [2024-11-20 17:56:26.583332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.583357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:03.186 [2024-11-20 17:56:26.583366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.947 ms 00:26:03.186 [2024-11-20 17:56:26.583372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.592257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.592281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:03.186 [2024-11-20 17:56:26.592289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.856 ms 00:26:03.186 [2024-11-20 17:56:26.592294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.592768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.592778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:03.186 [2024-11-20 17:56:26.592785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:26:03.186 [2024-11-20 17:56:26.592790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.637452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.637496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:03.186 [2024-11-20 17:56:26.637506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.646 ms 00:26:03.186 [2024-11-20 17:56:26.637512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.645471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:03.186 [2024-11-20 17:56:26.647566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.647591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:03.186 [2024-11-20 17:56:26.647600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.014 ms 00:26:03.186 [2024-11-20 17:56:26.647606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.647677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.647685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:03.186 [2024-11-20 17:56:26.647693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:03.186 [2024-11-20 17:56:26.647698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.647748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.647757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:03.186 [2024-11-20 17:56:26.647763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:03.186 [2024-11-20 17:56:26.647769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.647784] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.647793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:03.186 [2024-11-20 17:56:26.647799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:03.186 [2024-11-20 17:56:26.647805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.647829] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:03.186 [2024-11-20 17:56:26.647837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.647843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:03.186 [2024-11-20 17:56:26.647849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:03.186 [2024-11-20 17:56:26.647855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.665684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.665713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:03.186 [2024-11-20 17:56:26.665723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.788 ms 00:26:03.186 [2024-11-20 17:56:26.665729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.665788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.186 [2024-11-20 17:56:26.665795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:03.186 [2024-11-20 17:56:26.665802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:03.186 [2024-11-20 17:56:26.665807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.186 [2024-11-20 17:56:26.666553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.271 ms, result 0 00:26:04.571  [2024-11-20T17:57:23.020Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-20 17:57:22.870966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.480 [2024-11-20 17:57:22.871045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:59.480 [2024-11-20 17:57:22.871063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:59.480 [2024-11-20 17:57:22.871072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.480 [2024-11-20 17:57:22.874713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:59.480 [2024-11-20 17:57:22.879580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.480 [2024-11-20 17:57:22.879626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:59.480 [2024-11-20 17:57:22.879642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:26:59.480 [2024-11-20 17:57:22.879650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.480 [2024-11-20 17:57:22.894063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.480 [2024-11-20 17:57:22.894414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:59.480 [2024-11-20 17:57:22.894464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.199 ms 00:26:59.480 [2024-11-20 17:57:22.894489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.480 [2024-11-20 17:57:22.918835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:26:59.480 [2024-11-20 17:57:22.918905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:59.480 [2024-11-20 17:57:22.918918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.300 ms 00:26:59.480 [2024-11-20 17:57:22.918927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.480 [2024-11-20 17:57:22.925122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.480 [2024-11-20 17:57:22.925305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:59.480 [2024-11-20 17:57:22.925326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.156 ms 00:26:59.480 [2024-11-20 17:57:22.925334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.480 [2024-11-20 17:57:22.951677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.480 [2024-11-20 17:57:22.951722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:59.480 [2024-11-20 17:57:22.951734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.294 ms 00:26:59.480 [2024-11-20 17:57:22.951742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.480 [2024-11-20 17:57:22.967391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.480 [2024-11-20 17:57:22.967582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:59.480 [2024-11-20 17:57:22.967605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.604 ms 00:26:59.480 [2024-11-20 17:57:22.967613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.741 [2024-11-20 17:57:23.255448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.741 [2024-11-20 17:57:23.255501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:59.741 [2024-11-20 17:57:23.255522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 287.789 ms 00:26:59.741 [2024-11-20 17:57:23.255531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.004 [2024-11-20 17:57:23.281661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.004 [2024-11-20 17:57:23.281705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:00.004 [2024-11-20 17:57:23.281718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.114 ms 00:27:00.004 [2024-11-20 17:57:23.281726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.004 [2024-11-20 17:57:23.307235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.004 [2024-11-20 17:57:23.307279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:00.004 [2024-11-20 17:57:23.307290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.465 ms 00:27:00.004 [2024-11-20 17:57:23.307297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.004 [2024-11-20 17:57:23.332065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.004 [2024-11-20 17:57:23.332108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:00.004 [2024-11-20 17:57:23.332120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.724 ms 00:27:00.004 [2024-11-20 17:57:23.332127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.004 [2024-11-20 
17:57:23.356765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.004 [2024-11-20 17:57:23.356809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:00.004 [2024-11-20 17:57:23.356820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.569 ms 00:27:00.004 [2024-11-20 17:57:23.356827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.004 [2024-11-20 17:57:23.356895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:00.004 [2024-11-20 17:57:23.356911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100608 / 261120 wr_cnt: 1 state: open 00:27:00.004 [2024-11-20 17:57:23.356922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.356994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.357002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.357010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.357041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.357048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:00.004 [2024-11-20 17:57:23.357055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 
[2024-11-20 17:57:23.357109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:27:00.005 [2024-11-20 17:57:23.357320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:00.005 [2024-11-20 17:57:23.357639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:00.006 [2024-11-20 17:57:23.357747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:00.006 [2024-11-20 17:57:23.357755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d9b3bbe-d8c2-4135-b83e-ca00f5a592b5 00:27:00.006 [2024-11-20 17:57:23.357763] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100608 00:27:00.006 [2024-11-20 17:57:23.357774] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101568 00:27:00.006 [2024-11-20 17:57:23.357789] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100608 00:27:00.006 [2024-11-20 17:57:23.357798] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:27:00.006 [2024-11-20 17:57:23.357806] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:00.006 [2024-11-20 17:57:23.357814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:00.006 [2024-11-20 17:57:23.357822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:00.006 [2024-11-20 17:57:23.357829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:00.006 [2024-11-20 17:57:23.357836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:00.006 [2024-11-20 17:57:23.357844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.006 [2024-11-20 17:57:23.357853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:00.006 [2024-11-20 17:57:23.357863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:27:00.006 [2024-11-20 17:57:23.357881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.006 [2024-11-20 17:57:23.371317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.006 [2024-11-20 17:57:23.371359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:00.006 [2024-11-20 17:57:23.371371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.417 ms 00:27:00.006 [2024-11-20 17:57:23.371379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.006 [2024-11-20 17:57:23.371773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.006 [2024-11-20 17:57:23.371782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:00.006 [2024-11-20 17:57:23.371791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:27:00.006 [2024-11-20 17:57:23.371805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.006 [2024-11-20 17:57:23.408368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.006 [2024-11-20 17:57:23.408417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:00.006 [2024-11-20 17:57:23.408430] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.006 [2024-11-20 17:57:23.408440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.006 [2024-11-20 17:57:23.408497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.006 [2024-11-20 17:57:23.408507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:00.006 [2024-11-20 17:57:23.408516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.006 [2024-11-20 17:57:23.408531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.006 [2024-11-20 17:57:23.408617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.006 [2024-11-20 17:57:23.408629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:00.006 [2024-11-20 17:57:23.408638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.006 [2024-11-20 17:57:23.408647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.006 [2024-11-20 17:57:23.408663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.006 [2024-11-20 17:57:23.408673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:00.006 [2024-11-20 17:57:23.408682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.006 [2024-11-20 17:57:23.408691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.006 [2024-11-20 17:57:23.492249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.006 [2024-11-20 17:57:23.492304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:00.006 [2024-11-20 17:57:23.492318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.006 [2024-11-20 17:57:23.492326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.267 [2024-11-20 17:57:23.560938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.267 [2024-11-20 17:57:23.560989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:00.267 [2024-11-20 17:57:23.561001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.267 [2024-11-20 17:57:23.561010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.267 [2024-11-20 17:57:23.561074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.267 [2024-11-20 17:57:23.561085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:00.267 [2024-11-20 17:57:23.561094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.267 [2024-11-20 17:57:23.561102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.267 [2024-11-20 17:57:23.561158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.267 [2024-11-20 17:57:23.561168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:00.267 [2024-11-20 17:57:23.561176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.267 [2024-11-20 17:57:23.561184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.267 [2024-11-20 17:57:23.561286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.267 [2024-11-20 17:57:23.561297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:27:00.267 [2024-11-20 17:57:23.561306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.267 [2024-11-20 17:57:23.561315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.267 [2024-11-20 17:57:23.561347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.267 [2024-11-20 17:57:23.561357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:00.267 [2024-11-20 17:57:23.561365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.267 [2024-11-20 17:57:23.561374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.267 [2024-11-20 17:57:23.561415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.268 [2024-11-20 17:57:23.561428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:00.268 [2024-11-20 17:57:23.561437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.268 [2024-11-20 17:57:23.561445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.268 [2024-11-20 17:57:23.561492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:00.268 [2024-11-20 17:57:23.561503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:00.268 [2024-11-20 17:57:23.561512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:00.268 [2024-11-20 17:57:23.561520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.268 [2024-11-20 17:57:23.561656] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 692.013 ms, result 0 00:27:01.653 00:27:01.653 00:27:01.653 17:57:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:03.568 17:57:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:03.568 [2024-11-20 17:57:27.089130] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:27:03.568 [2024-11-20 17:57:27.089394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81225 ] 00:27:03.829 [2024-11-20 17:57:27.250444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.091 [2024-11-20 17:57:27.369346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.354 [2024-11-20 17:57:27.666917] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.354 [2024-11-20 17:57:27.666999] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:04.354 [2024-11-20 17:57:27.828763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.828827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:04.354 [2024-11-20 17:57:27.828847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:04.354 [2024-11-20 17:57:27.828856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.828935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.828948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:04.354 [2024-11-20 17:57:27.828960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:04.354 [2024-11-20 17:57:27.828970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.829013] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:04.354 [2024-11-20 17:57:27.829721] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:04.354 [2024-11-20 17:57:27.829769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.829778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:04.354 [2024-11-20 17:57:27.829788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:27:04.354 [2024-11-20 17:57:27.829796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.831496] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:04.354 [2024-11-20 17:57:27.846080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.846141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:04.354 [2024-11-20 17:57:27.846156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.586 ms 00:27:04.354 [2024-11-20 17:57:27.846164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.846255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.846266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:04.354 [2024-11-20 17:57:27.846276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:04.354 [2024-11-20 17:57:27.846284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.854987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:04.354 [2024-11-20 17:57:27.855024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:04.354 [2024-11-20 17:57:27.855035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.625 ms 00:27:04.354 [2024-11-20 17:57:27.855053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.855162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.855173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:04.354 [2024-11-20 17:57:27.855182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:04.354 [2024-11-20 17:57:27.855191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.855236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.855246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:04.354 [2024-11-20 17:57:27.855254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:04.354 [2024-11-20 17:57:27.855261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.855289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:04.354 [2024-11-20 17:57:27.859507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.859546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:04.354 [2024-11-20 17:57:27.859558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.227 ms 00:27:04.354 [2024-11-20 17:57:27.859570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.859606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.859615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:04.354 [2024-11-20 17:57:27.859624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:04.354 [2024-11-20 17:57:27.859631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.859683] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:04.354 [2024-11-20 17:57:27.859708] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:04.354 [2024-11-20 17:57:27.859746] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:04.354 [2024-11-20 17:57:27.859766] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:04.354 [2024-11-20 17:57:27.859887] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:04.354 [2024-11-20 17:57:27.859900] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:04.354 [2024-11-20 17:57:27.859911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:04.354 [2024-11-20 17:57:27.859922] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:04.354 [2024-11-20 17:57:27.859932] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:04.354 [2024-11-20 17:57:27.859940] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:04.354 [2024-11-20 17:57:27.859948] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:04.354 [2024-11-20 17:57:27.859957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:04.354 [2024-11-20 17:57:27.859968] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:04.354 [2024-11-20 17:57:27.859977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.859985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:04.354 [2024-11-20 17:57:27.859993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:27:04.354 [2024-11-20 17:57:27.860001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.860084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.354 [2024-11-20 17:57:27.860092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:04.354 [2024-11-20 17:57:27.860100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:04.354 [2024-11-20 17:57:27.860108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.354 [2024-11-20 17:57:27.860217] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:04.354 [2024-11-20 17:57:27.860228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:04.354 [2024-11-20 17:57:27.860237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.354 [2024-11-20 17:57:27.860247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.354 [2024-11-20 17:57:27.860255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:04.354 [2024-11-20 17:57:27.860262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:04.354 [2024-11-20 17:57:27.860269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:04.354 [2024-11-20 17:57:27.860277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:04.354 [2024-11-20 17:57:27.860285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:04.354 [2024-11-20 17:57:27.860292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.354 [2024-11-20 17:57:27.860299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:04.354 [2024-11-20 17:57:27.860305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:04.354 [2024-11-20 17:57:27.860311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:04.354 [2024-11-20 17:57:27.860318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:04.354 [2024-11-20 17:57:27.860325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:04.354 [2024-11-20 17:57:27.860339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.354 [2024-11-20 17:57:27.860346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:04.354 [2024-11-20 17:57:27.860360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:04.355 [2024-11-20 17:57:27.860367] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:04.355 [2024-11-20 17:57:27.860382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.355 [2024-11-20 17:57:27.860396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:04.355 [2024-11-20 17:57:27.860403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.355 [2024-11-20 17:57:27.860417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:04.355 [2024-11-20 17:57:27.860424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.355 [2024-11-20 17:57:27.860438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:04.355 [2024-11-20 17:57:27.860445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:04.355 [2024-11-20 17:57:27.860459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:04.355 [2024-11-20 17:57:27.860466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.355 [2024-11-20 17:57:27.860479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:04.355 [2024-11-20 17:57:27.860486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:04.355 [2024-11-20 17:57:27.860493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:04.355 [2024-11-20 17:57:27.860499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:04.355 [2024-11-20 17:57:27.860505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:04.355 [2024-11-20 17:57:27.860512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:04.355 [2024-11-20 17:57:27.860525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:04.355 [2024-11-20 17:57:27.860532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860539] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:04.355 [2024-11-20 17:57:27.860547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:04.355 [2024-11-20 17:57:27.860554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:04.355 [2024-11-20 17:57:27.860562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:04.355 [2024-11-20 17:57:27.860570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:04.355 [2024-11-20 17:57:27.860576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:04.355 [2024-11-20 17:57:27.860585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:04.355 
[2024-11-20 17:57:27.860592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:04.355 [2024-11-20 17:57:27.860599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:04.355 [2024-11-20 17:57:27.860606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:04.355 [2024-11-20 17:57:27.860614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:04.355 [2024-11-20 17:57:27.860624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.355 [2024-11-20 17:57:27.860632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:04.355 [2024-11-20 17:57:27.860640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:04.355 [2024-11-20 17:57:27.860647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:04.355 [2024-11-20 17:57:27.860654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:04.355 [2024-11-20 17:57:27.860662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:04.355 [2024-11-20 17:57:27.860669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:04.355 [2024-11-20 17:57:27.860676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:04.355 [2024-11-20 17:57:27.860683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:04.355 [2024-11-20 17:57:27.860690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:04.355 [2024-11-20 17:57:27.860698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:04.355 [2024-11-20 17:57:27.860705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:04.355 [2024-11-20 17:57:27.860730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:04.355 [2024-11-20 17:57:27.860738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:04.355 [2024-11-20 17:57:27.860746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:04.355 [2024-11-20 17:57:27.860754] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:04.355 [2024-11-20 17:57:27.860772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.355 [2024-11-20 17:57:27.860781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.355 [2024-11-20 17:57:27.860789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:04.355 [2024-11-20 17:57:27.860796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:04.355 [2024-11-20 17:57:27.860803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:04.355 [2024-11-20 17:57:27.860811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.355 [2024-11-20 17:57:27.860819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:04.355 [2024-11-20 17:57:27.860827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:27:04.355 [2024-11-20 17:57:27.860835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.892905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.892948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:04.617 [2024-11-20 17:57:27.892962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.013 ms 00:27:04.617 [2024-11-20 17:57:27.892971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.893073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.893082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:04.617 [2024-11-20 17:57:27.893091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:04.617 [2024-11-20 17:57:27.893100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.946089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.946295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:04.617 [2024-11-20 17:57:27.946318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.930 ms 00:27:04.617 [2024-11-20 17:57:27.946328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.946380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.946391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:04.617 [2024-11-20 17:57:27.946409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:04.617 [2024-11-20 17:57:27.946416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.947014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.947045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:04.617 [2024-11-20 17:57:27.947057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:27:04.617 [2024-11-20 17:57:27.947065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.947239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.947254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:04.617 [2024-11-20 17:57:27.947265] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:27:04.617 [2024-11-20 17:57:27.947279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.963368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.963408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:04.617 [2024-11-20 17:57:27.963422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.067 ms 00:27:04.617 [2024-11-20 17:57:27.963431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:27.977726] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:04.617 [2024-11-20 17:57:27.977932] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:04.617 [2024-11-20 17:57:27.977954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:27.977963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:04.617 [2024-11-20 17:57:27.977974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.410 ms 00:27:04.617 [2024-11-20 17:57:27.977981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.004334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.004379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:04.617 [2024-11-20 17:57:28.004392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.205 ms 00:27:04.617 [2024-11-20 17:57:28.004401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.017281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.017331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:04.617 [2024-11-20 17:57:28.017343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.825 ms 00:27:04.617 [2024-11-20 17:57:28.017351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.030073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.030113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:04.617 [2024-11-20 17:57:28.030125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.674 ms 00:27:04.617 [2024-11-20 17:57:28.030133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.030783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.030807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:04.617 [2024-11-20 17:57:28.030818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:27:04.617 [2024-11-20 17:57:28.030830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.097568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.097637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:04.617 [2024-11-20 17:57:28.097662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.716 ms 00:27:04.617 [2024-11-20 17:57:28.097671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.109069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:04.617 [2024-11-20 17:57:28.112117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.112291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:04.617 [2024-11-20 17:57:28.112311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.389 ms 00:27:04.617 [2024-11-20 17:57:28.112320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.112410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.112422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:04.617 [2024-11-20 17:57:28.112431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:04.617 [2024-11-20 17:57:28.112443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.114129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.114175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:04.617 [2024-11-20 17:57:28.114186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:27:04.617 [2024-11-20 17:57:28.114193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.114222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.114231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:04.617 [2024-11-20 17:57:28.114240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:04.617 [2024-11-20 17:57:28.114247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.114293] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:04.617 [2024-11-20 17:57:28.114304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.114313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:04.617 [2024-11-20 17:57:28.114322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:04.617 [2024-11-20 17:57:28.114331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.139920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.139970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:04.617 [2024-11-20 17:57:28.139983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.569 ms 00:27:04.617 [2024-11-20 17:57:28.139998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.617 [2024-11-20 17:57:28.140089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.617 [2024-11-20 17:57:28.140101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:04.617 [2024-11-20 17:57:28.140111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:04.617 [2024-11-20 17:57:28.140119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:04.617 [2024-11-20 17:57:28.141397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.119 ms, result 0 00:27:06.004 [2024-11-20T17:57:30.489Z] Copying: 1108/1048576 [kB] (1108 kBps) … [2024-11-20T17:58:08.728Z] Copying: 1024/1024 [MB] (average 25 MBps) [2024-11-20 17:58:08.421212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.421322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:45.188 [2024-11-20 17:58:08.421342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:45.188 [2024-11-20 17:58:08.421354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.421384] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:45.188 [2024-11-20 17:58:08.425817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.426023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:45.188 [2024-11-20 17:58:08.426243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.410 ms 00:27:45.188
[2024-11-20 17:58:08.426271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.426578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.426592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:45.188 [2024-11-20 17:58:08.426607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:27:45.188 [2024-11-20 17:58:08.426617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.441149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.441311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:45.188 [2024-11-20 17:58:08.441331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.511 ms 00:27:45.188 [2024-11-20 17:58:08.441340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.447679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.447714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:45.188 [2024-11-20 17:58:08.447734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.303 ms 00:27:45.188 [2024-11-20 17:58:08.447743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.474346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.474383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:45.188 [2024-11-20 17:58:08.474394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.541 ms 00:27:45.188 [2024-11-20 17:58:08.474403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.490876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.490918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:45.188 [2024-11-20 17:58:08.490932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.419 ms 00:27:45.188 [2024-11-20 17:58:08.490942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.496015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.496051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:45.188 [2024-11-20 17:58:08.496062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:27:45.188 [2024-11-20 17:58:08.496071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.522301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.522478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:45.188 [2024-11-20 17:58:08.522499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.206 ms 00:27:45.188 [2024-11-20 17:58:08.522508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.548123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.548169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:45.188 [2024-11-20 17:58:08.548193] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.316 ms 00:27:45.188 [2024-11-20 17:58:08.548201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.572884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.188 [2024-11-20 17:58:08.573063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:45.188 [2024-11-20 17:58:08.573085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.620 ms 00:27:45.188 [2024-11-20 17:58:08.573094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.188 [2024-11-20 17:58:08.597929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.189 [2024-11-20 17:58:08.597965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:45.189 [2024-11-20 17:58:08.597978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.674 ms 00:27:45.189 [2024-11-20 17:58:08.597986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.189 [2024-11-20 17:58:08.598031] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:45.189 [2024-11-20 17:58:08.598048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:45.189 [2024-11-20 17:58:08.598059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:45.189 [2024-11-20 17:58:08.598068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 … Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:45.190 [2024-11-20 17:58:08.598833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:45.190 [2024-11-20 17:58:08.598843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d9b3bbe-d8c2-4135-b83e-ca00f5a592b5 00:27:45.190 [2024-11-20 17:58:08.598851] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:45.190 [2024-11-20 17:58:08.598859] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 164032 00:27:45.190 [2024-11-20 17:58:08.598866] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 162048 00:27:45.190 [2024-11-20 17:58:08.598894] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:27:45.190 [2024-11-20 17:58:08.598902] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:45.190 [2024-11-20 17:58:08.598910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:45.190 [2024-11-20 17:58:08.598918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:45.190 [2024-11-20 17:58:08.598932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:45.190 [2024-11-20 17:58:08.598939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:45.190 [2024-11-20 17:58:08.598947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.190 [2024-11-20 17:58:08.598961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:45.190 [2024-11-20 17:58:08.598971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:27:45.190 [2024-11-20 17:58:08.598979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.190 [2024-11-20 17:58:08.612790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.190 [2024-11-20 17:58:08.612978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:45.190 [2024-11-20 17:58:08.612998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.791 ms 00:27:45.190 [2024-11-20 17:58:08.613007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.190 [2024-11-20 17:58:08.613410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:27:45.190 [2024-11-20 17:58:08.613420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:45.190 [2024-11-20 17:58:08.613430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:27:45.190 [2024-11-20 17:58:08.613437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.190 [2024-11-20 17:58:08.650056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.190 [2024-11-20 17:58:08.650091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:45.190 [2024-11-20 17:58:08.650103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.190 [2024-11-20 17:58:08.650113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.190 [2024-11-20 17:58:08.650181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.190 [2024-11-20 17:58:08.650190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:45.190 [2024-11-20 17:58:08.650199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.190 [2024-11-20 17:58:08.650207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.190 [2024-11-20 17:58:08.650303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.190 [2024-11-20 17:58:08.650315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:45.190 [2024-11-20 17:58:08.650325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.190 [2024-11-20 17:58:08.650333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.190 [2024-11-20 17:58:08.650350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.190 [2024-11-20 17:58:08.650358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:45.190 [2024-11-20 17:58:08.650367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.190 [2024-11-20 17:58:08.650375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.735407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.735455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:45.451 [2024-11-20 17:58:08.735468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.735478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.805120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.805171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.451 [2024-11-20 17:58:08.805183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.805192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.805251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.805268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.451 [2024-11-20 17:58:08.805277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.805285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
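The "Dump statistics" records above are internally consistent and can be checked with nothing but the values in the dump; a quick worked check:

```python
# Values copied verbatim from the ftl_dev_dump_stats / ftl_dev_dump_bands
# records above (blocks written to media vs. blocks written by the user).
total_writes = 164032
user_writes = 162048

# Write amplification factor = media writes / user writes.
print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0122, as logged

# "total valid LBAs: 262656" matches the band dump:
# Band 1 (closed) holds 261120 valid blocks, Band 2 (open) holds 1536.
assert 261120 + 1536 == 262656
```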
00:27:45.451 [2024-11-20 17:58:08.805346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.805356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.451 [2024-11-20 17:58:08.805365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.805373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.805469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.805479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.451 [2024-11-20 17:58:08.805491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.805500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.805532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.805541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:45.451 [2024-11-20 17:58:08.805551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.805559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.805603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.805613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.451 [2024-11-20 17:58:08.805626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.805637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.805688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.451 [2024-11-20 17:58:08.805698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.451 [2024-11-20 17:58:08.805708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.451 [2024-11-20 17:58:08.805716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.451 [2024-11-20 17:58:08.805851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.609 ms, result 0 00:27:46.393 00:27:46.393 00:27:46.393 17:58:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:48.943 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:48.943 17:58:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:48.943 [2024-11-20 17:58:11.924005] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:27:48.943 [2024-11-20 17:58:11.924152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81680 ] 00:27:48.943 [2024-11-20 17:58:12.086741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.943 [2024-11-20 17:58:12.210634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.205 [2024-11-20 17:58:12.504503] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:49.205 [2024-11-20 17:58:12.504590] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:49.205 [2024-11-20 17:58:12.666467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.666527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:49.205 [2024-11-20 17:58:12.666546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:49.205 [2024-11-20 17:58:12.666555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.666611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.666622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.205 [2024-11-20 17:58:12.666633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:49.205 [2024-11-20 17:58:12.666641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.666663] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:49.205 [2024-11-20 17:58:12.667576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:49.205 [2024-11-20 17:58:12.667631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.667640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.205 [2024-11-20 17:58:12.667650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:27:49.205 [2024-11-20 17:58:12.667658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.669420] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:49.205 [2024-11-20 17:58:12.683727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.683942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:49.205 [2024-11-20 17:58:12.683965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.308 ms 00:27:49.205 [2024-11-20 17:58:12.683973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.684047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.684057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:49.205 [2024-11-20 17:58:12.684066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:49.205 [2024-11-20 17:58:12.684074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.692002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:49.205 [2024-11-20 17:58:12.692045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.205 [2024-11-20 17:58:12.692055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.848 ms 00:27:49.205 [2024-11-20 17:58:12.692071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.692153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.692163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.205 [2024-11-20 17:58:12.692171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:49.205 [2024-11-20 17:58:12.692179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.692222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.692232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:49.205 [2024-11-20 17:58:12.692240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:49.205 [2024-11-20 17:58:12.692248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.205 [2024-11-20 17:58:12.692277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:49.205 [2024-11-20 17:58:12.696283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.205 [2024-11-20 17:58:12.696318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.205 [2024-11-20 17:58:12.696329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.017 ms 00:27:49.206 [2024-11-20 17:58:12.696341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.206 [2024-11-20 17:58:12.696375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.206 [2024-11-20 17:58:12.696384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:49.206 [2024-11-20 17:58:12.696392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:49.206 [2024-11-20 17:58:12.696400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.206 [2024-11-20 17:58:12.696451] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:49.206 [2024-11-20 17:58:12.696474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:49.206 [2024-11-20 17:58:12.696512] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:49.206 [2024-11-20 17:58:12.696531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:49.206 [2024-11-20 17:58:12.696636] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:49.206 [2024-11-20 17:58:12.696647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:49.206 [2024-11-20 17:58:12.696658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:49.206 [2024-11-20 17:58:12.696668] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:49.206 [2024-11-20 17:58:12.696677] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:49.206 [2024-11-20 17:58:12.696686] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:49.206 [2024-11-20 17:58:12.696693] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:49.206 [2024-11-20 17:58:12.696701] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:49.206 [2024-11-20 17:58:12.696712] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:49.206 [2024-11-20 17:58:12.696720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.206 [2024-11-20 17:58:12.696727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:49.206 [2024-11-20 17:58:12.696734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:27:49.206 [2024-11-20 17:58:12.696743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.206 [2024-11-20 17:58:12.696825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.206 [2024-11-20 17:58:12.696834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:49.206 [2024-11-20 17:58:12.696842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:49.206 [2024-11-20 17:58:12.696849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.206 [2024-11-20 17:58:12.696977] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:49.206 [2024-11-20 17:58:12.696990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:49.206 [2024-11-20 17:58:12.696999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:49.206 [2024-11-20 17:58:12.697022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:49.206 [2024-11-20 17:58:12.697045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.206 [2024-11-20 17:58:12.697059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:49.206 [2024-11-20 17:58:12.697066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:49.206 [2024-11-20 17:58:12.697073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:49.206 [2024-11-20 17:58:12.697083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:49.206 [2024-11-20 17:58:12.697090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:49.206 [2024-11-20 17:58:12.697104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:49.206 [2024-11-20 17:58:12.697118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697124] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:49.206 [2024-11-20 17:58:12.697137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:49.206 [2024-11-20 17:58:12.697158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:49.206 [2024-11-20 17:58:12.697178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:49.206 [2024-11-20 17:58:12.697208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:49.206 [2024-11-20 17:58:12.697228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.206 [2024-11-20 17:58:12.697241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:49.206 [2024-11-20 17:58:12.697248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:49.206 [2024-11-20 17:58:12.697254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:49.206 [2024-11-20 17:58:12.697261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:49.206 [2024-11-20 17:58:12.697267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:49.206 [2024-11-20 17:58:12.697274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:49.206 [2024-11-20 17:58:12.697287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:49.206 [2024-11-20 17:58:12.697293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:49.206 [2024-11-20 17:58:12.697308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:49.206 [2024-11-20 17:58:12.697316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:49.206 [2024-11-20 17:58:12.697332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:49.206 [2024-11-20 17:58:12.697338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:49.206 [2024-11-20 17:58:12.697346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:49.206 
[2024-11-20 17:58:12.697353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:49.206 [2024-11-20 17:58:12.697359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:49.206 [2024-11-20 17:58:12.697366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:49.206 [2024-11-20 17:58:12.697374] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:49.206 [2024-11-20 17:58:12.697383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.206 [2024-11-20 17:58:12.697392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:49.206 [2024-11-20 17:58:12.697400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:49.206 [2024-11-20 17:58:12.697408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:49.206 [2024-11-20 17:58:12.697414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:49.206 [2024-11-20 17:58:12.697421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:49.206 [2024-11-20 17:58:12.697428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:49.206 [2024-11-20 17:58:12.697435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:49.206 [2024-11-20 17:58:12.697442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:49.206 [2024-11-20 17:58:12.697449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:49.206 [2024-11-20 17:58:12.697456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:49.206 [2024-11-20 17:58:12.697463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:49.206 [2024-11-20 17:58:12.697470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:49.206 [2024-11-20 17:58:12.697476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:49.206 [2024-11-20 17:58:12.697483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:49.206 [2024-11-20 17:58:12.697491] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:49.206 [2024-11-20 17:58:12.697503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.206 [2024-11-20 17:58:12.697511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:49.207 [2024-11-20 17:58:12.697518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:49.207 [2024-11-20 17:58:12.697525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:49.207 [2024-11-20 17:58:12.697532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:49.207 [2024-11-20 17:58:12.697539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.207 [2024-11-20 17:58:12.697547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:49.207 [2024-11-20 17:58:12.697556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:27:49.207 [2024-11-20 17:58:12.697564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.207 [2024-11-20 17:58:12.729403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.207 [2024-11-20 17:58:12.729449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.207 [2024-11-20 17:58:12.729461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.794 ms 00:27:49.207 [2024-11-20 17:58:12.729469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.207 [2024-11-20 17:58:12.729563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.207 [2024-11-20 17:58:12.729572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:49.207 [2024-11-20 17:58:12.729580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:49.207 [2024-11-20 17:58:12.729587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.776177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.776228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.469 [2024-11-20 17:58:12.776241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.534 ms 00:27:49.469 [2024-11-20 17:58:12.776249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.776299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.776309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.469 [2024-11-20 17:58:12.776323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:49.469 [2024-11-20 17:58:12.776331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.776924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.776954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.469 [2024-11-20 17:58:12.776964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:27:49.469 [2024-11-20 17:58:12.776973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.777133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.777153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.469 [2024-11-20 17:58:12.777162] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:27:49.469 [2024-11-20 17:58:12.777177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.792941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.792983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.469 [2024-11-20 17:58:12.792998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.743 ms 00:27:49.469 [2024-11-20 17:58:12.793006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.807142] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:49.469 [2024-11-20 17:58:12.807331] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:49.469 [2024-11-20 17:58:12.807352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.807361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:49.469 [2024-11-20 17:58:12.807372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.238 ms 00:27:49.469 [2024-11-20 17:58:12.807379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.833099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.833146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:49.469 [2024-11-20 17:58:12.833158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.676 ms 00:27:49.469 [2024-11-20 17:58:12.833166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.469 [2024-11-20 17:58:12.845818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.469 [2024-11-20 17:58:12.845860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:49.469 [2024-11-20 17:58:12.845884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.600 ms 00:27:49.470 [2024-11-20 17:58:12.845893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.858420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.858461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:49.470 [2024-11-20 17:58:12.858473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.481 ms 00:27:49.470 [2024-11-20 17:58:12.858481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.859168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.859193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:49.470 [2024-11-20 17:58:12.859204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:27:49.470 [2024-11-20 17:58:12.859215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.925195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.925427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:49.470 [2024-11-20 17:58:12.925460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.960 ms 00:27:49.470 [2024-11-20 17:58:12.925470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.936965] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:49.470 [2024-11-20 17:58:12.940047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.940216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:49.470 [2024-11-20 17:58:12.940236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.455 ms 00:27:49.470 [2024-11-20 17:58:12.940246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.940334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.940346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:49.470 [2024-11-20 17:58:12.940356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:49.470 [2024-11-20 17:58:12.940367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.941196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.941234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:49.470 [2024-11-20 17:58:12.941246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:27:49.470 [2024-11-20 17:58:12.941255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.941285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.941294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:49.470 [2024-11-20 17:58:12.941304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:49.470 [2024-11-20 17:58:12.941313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.941360] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:49.470 [2024-11-20 17:58:12.941372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.941381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:49.470 [2024-11-20 17:58:12.941391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:49.470 [2024-11-20 17:58:12.941400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.966763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.966808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:49.470 [2024-11-20 17:58:12.966822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.344 ms 00:27:49.470 [2024-11-20 17:58:12.966837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.470 [2024-11-20 17:58:12.966943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.470 [2024-11-20 17:58:12.966956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:49.470 [2024-11-20 17:58:12.966966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:49.470 [2024-11-20 17:58:12.966974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
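The layout dumped above also lets the L2P region size be cross-checked against the entry count, and it shows why ftl_l2p_cache_init pages the table instead of pinning it:

```python
# From the ftl_layout_setup dump above: 20971520 L2P entries, 4 bytes each.
entries = 20971520
entry_bytes = 4
print(entries * entry_bytes / 2**20)   # -> 80.0, matching "Region l2p ... 80.00 MiB"

# ftl_l2p_cache_init caps the resident cache at 9 (of 10) MiB, so only about
# an eighth of the 80 MiB table stays in memory; the rest is paged on demand.
```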
00:27:49.470 [2024-11-20 17:58:12.968205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.231 ms, result 0 00:27:50.855
[copy progress elided: 64 intermediate "Copying: N/1024 [MB]" records, 2024-11-20T17:58:15Z through 17:59:17Z, per-interval throughput 10-27 MBps]
[2024-11-20T17:59:17.479Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-20 17:59:17.458844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.939 [2024-11-20 17:59:17.458955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:53.939 [2024-11-20 17:59:17.458975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:53.939 [2024-11-20 17:59:17.458985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.939 [2024-11-20 17:59:17.459014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:53.939 [2024-11-20 17:59:17.462338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.939 [2024-11-20 17:59:17.462384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:53.939 [2024-11-20 17:59:17.462405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:28:53.939 [2024-11-20 17:59:17.462414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:53.939 [2024-11-20 17:59:17.462661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.939 [2024-11-20 17:59:17.462672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:53.939 [2024-11-20 17:59:17.462683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:28:53.939 [2024-11-20 17:59:17.462691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:53.939 [2024-11-20 17:59:17.466174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.939 [2024-11-20 17:59:17.466197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:53.939 [2024-11-20 17:59:17.466207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.468 ms 00:28:53.939 [2024-11-20 17:59:17.466217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:53.939 [2024-11-20 17:59:17.472840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.939 [2024-11-20 17:59:17.472890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:53.939 [2024-11-20 17:59:17.472904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:28:53.939 [2024-11-20 17:59:17.472914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.201 [2024-11-20 17:59:17.501619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.201 [2024-11-20 17:59:17.501670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:54.201 [2024-11-20 17:59:17.501684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.630 ms 00:28:54.201 [2024-11-20 17:59:17.501692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.201 [2024-11-20 17:59:17.519707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.201 [2024-11-20 17:59:17.519756] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:54.201 [2024-11-20 17:59:17.519769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.964 ms 00:28:54.201 [2024-11-20 17:59:17.519778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.201 [2024-11-20 17:59:17.524352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.201 [2024-11-20 17:59:17.524405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:54.201 [2024-11-20 17:59:17.524417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.519 ms 00:28:54.201 [2024-11-20 17:59:17.524426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.201 [2024-11-20 17:59:17.550504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.201 [2024-11-20 17:59:17.550548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:54.202 [2024-11-20 17:59:17.550560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.062 ms 00:28:54.202 [2024-11-20 17:59:17.550568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.202 [2024-11-20 17:59:17.576219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.202 [2024-11-20 17:59:17.576277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:54.202 [2024-11-20 17:59:17.576289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.605 ms 00:28:54.202 [2024-11-20 17:59:17.576296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.202 [2024-11-20 17:59:17.601234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.202 [2024-11-20 17:59:17.601278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:54.202 [2024-11-20 17:59:17.601290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.889 ms 00:28:54.202 [2024-11-20 17:59:17.601298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.202 [2024-11-20 17:59:17.625920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.202 [2024-11-20 17:59:17.626108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:54.202 [2024-11-20 17:59:17.626130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.534 ms 00:28:54.202 [2024-11-20 17:59:17.626138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:54.202 [2024-11-20 17:59:17.626292] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:54.202 [2024-11-20 17:59:17.626325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:54.202 [2024-11-20 17:59:17.626345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[Bands 3 through 100 elided: 98 identical records, each "0 / 261120 wr_cnt: 0 state: free", logged 17:59:17.626355 through 17:59:17.627137]
00:28:54.203 [2024-11-20 17:59:17.627154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:54.203 [2024-11-20 17:59:17.627167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d9b3bbe-d8c2-4135-b83e-ca00f5a592b5
00:28:54.203 [2024-11-20 17:59:17.627175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:28:54.203 [2024-11-20 17:59:17.627183] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:28:54.203 [2024-11-20 17:59:17.627190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:54.203 [2024-11-20 17:59:17.627199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:54.203 [2024-11-20 17:59:17.627207] ftl_debug.c:
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:54.203 [2024-11-20 17:59:17.627217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:54.203 [2024-11-20 17:59:17.627232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:54.203 [2024-11-20 17:59:17.627239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:54.203 [2024-11-20 17:59:17.627245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:54.203 [2024-11-20 17:59:17.627253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.203 [2024-11-20 17:59:17.627262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:54.203 [2024-11-20 17:59:17.627272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:28:54.203 [2024-11-20 17:59:17.627280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.203 [2024-11-20 17:59:17.640719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.203 [2024-11-20 17:59:17.640761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:54.203 [2024-11-20 17:59:17.640772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.416 ms 00:28:54.203 [2024-11-20 17:59:17.640781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.203 [2024-11-20 17:59:17.641218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.203 [2024-11-20 17:59:17.641236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:54.203 [2024-11-20 17:59:17.641254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:28:54.203 [2024-11-20 17:59:17.641262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.203 [2024-11-20 17:59:17.677708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.203 [2024-11-20 17:59:17.677755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:54.203 [2024-11-20 17:59:17.677767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.203 [2024-11-20 17:59:17.677777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.203 [2024-11-20 17:59:17.677838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.203 [2024-11-20 17:59:17.677849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:54.203 [2024-11-20 17:59:17.677863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.203 [2024-11-20 17:59:17.677885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.203 [2024-11-20 17:59:17.677971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.203 [2024-11-20 17:59:17.677983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:54.203 [2024-11-20 17:59:17.677993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.203 [2024-11-20 17:59:17.678003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.203 [2024-11-20 17:59:17.678019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.203 [2024-11-20 17:59:17.678030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:54.203 [2024-11-20 17:59:17.678040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:54.203 [2024-11-20 17:59:17.678052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.464 [2024-11-20 17:59:17.761469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.464 [2024-11-20 17:59:17.761522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:54.464 [2024-11-20 17:59:17.761535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.464 [2024-11-20 17:59:17.761543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.464 [2024-11-20 17:59:17.830129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.464 [2024-11-20 17:59:17.830184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:54.464 [2024-11-20 17:59:17.830203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.464 [2024-11-20 17:59:17.830212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.464 [2024-11-20 17:59:17.830276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.464 [2024-11-20 17:59:17.830286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:54.464 [2024-11-20 17:59:17.830295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.464 [2024-11-20 17:59:17.830304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.464 [2024-11-20 17:59:17.830366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.464 [2024-11-20 17:59:17.830377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:54.464 [2024-11-20 17:59:17.830386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.464 [2024-11-20 17:59:17.830395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.464 [2024-11-20 17:59:17.830496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.464 [2024-11-20 17:59:17.830506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:54.465 [2024-11-20 17:59:17.830515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.465 [2024-11-20 17:59:17.830524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.465 [2024-11-20 17:59:17.830557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.465 [2024-11-20 17:59:17.830567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:54.465 [2024-11-20 17:59:17.830576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.465 [2024-11-20 17:59:17.830586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.465 [2024-11-20 17:59:17.830632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.465 [2024-11-20 17:59:17.830642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:54.465 [2024-11-20 17:59:17.830650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.465 [2024-11-20 17:59:17.830659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.465 [2024-11-20 17:59:17.830706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:54.465 [2024-11-20 17:59:17.830717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:54.465 [2024-11-20 17:59:17.830726] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:54.465 [2024-11-20 17:59:17.830734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.465 [2024-11-20 17:59:17.830867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.993 ms, result 0 00:28:55.038 00:28:55.038 00:28:55.299 17:59:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:57.216 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:57.216 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:57.216 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:57.216 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:57.216 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:57.478 Process with pid 79891 is not found 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79891 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79891 ']' 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79891 00:28:57.478 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79891) - No such process 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79891 is not found' 00:28:57.478 17:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:57.738 Remove shared memory files 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:57.739 ************************************ 00:28:57.739 END TEST ftl_dirty_shutdown 00:28:57.739 ************************************ 00:28:57.739 00:28:57.739 real 3m58.409s 00:28:57.739 user 4m18.864s 00:28:57.739 sys 0m25.166s 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.739 17:59:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:57.739 17:59:21 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:57.739 17:59:21 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:57.739 17:59:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.739 
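(Editor's note: the md5sum -c call above is the heart of the dirty-shutdown test: a checksum taken before the simulated power loss must still verify after FTL recovery. A minimal sketch of that pattern with hypothetical file paths; the real test drives the device through the bdevs configured earlier:)

# Sketch of the verify-after-recovery pattern used by dirty_shutdown.sh.
md5sum testfile2 > testfile2.md5      # checksum while the FTL bdev is healthy
# ... simulate a dirty shutdown, then bring the FTL device back up ...
md5sum -c testfile2.md5               # prints "testfile2: OK" only if recovery restored the data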
17:59:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:58.000 ************************************ 00:28:58.000 START TEST ftl_upgrade_shutdown 00:28:58.000 ************************************ 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:58.000 * Looking for test storage... 00:28:58.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.000 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:58.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.000 --rc genhtml_branch_coverage=1 00:28:58.000 --rc genhtml_function_coverage=1 00:28:58.000 --rc genhtml_legend=1 00:28:58.000 --rc geninfo_all_blocks=1 00:28:58.000 --rc geninfo_unexecuted_blocks=1 00:28:58.000 00:28:58.000 ' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:58.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.001 --rc genhtml_branch_coverage=1 00:28:58.001 --rc genhtml_function_coverage=1 00:28:58.001 --rc genhtml_legend=1 00:28:58.001 --rc geninfo_all_blocks=1 00:28:58.001 --rc geninfo_unexecuted_blocks=1 00:28:58.001 00:28:58.001 ' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:58.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.001 --rc genhtml_branch_coverage=1 00:28:58.001 --rc genhtml_function_coverage=1 00:28:58.001 --rc genhtml_legend=1 00:28:58.001 --rc geninfo_all_blocks=1 00:28:58.001 --rc geninfo_unexecuted_blocks=1 00:28:58.001 00:28:58.001 ' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:58.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.001 --rc genhtml_branch_coverage=1 00:28:58.001 --rc genhtml_function_coverage=1 00:28:58.001 --rc genhtml_legend=1 00:28:58.001 --rc geninfo_all_blocks=1 00:28:58.001 --rc geninfo_unexecuted_blocks=1 00:28:58.001 00:28:58.001 ' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:58.001 17:59:21 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82450 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82450 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82450 ']' 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.001 17:59:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:58.262 [2024-11-20 17:59:21.555478] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
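(Editor's note: tcp_target_setup above reduces to starting spdk_tgt pinned to core 0, recording its pid, and blocking until the RPC socket answers. A condensed sketch under those assumptions; the until-loop is only a bare-bones stand-in for the waitforlisten helper:)

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$spdk_tgt "--cpumask=[0]" &                                  # same launch as ftl/common.sh@87
spdk_tgt_pid=$!
# poll the default RPC socket until the target is ready to serve requests
until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done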
00:28:58.262 [2024-11-20 17:59:21.555809] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82450 ] 00:28:58.262 [2024-11-20 17:59:21.718215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.524 [2024-11-20 17:59:21.848337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:59.097 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:28:59.358 17:59:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:59.618 { 00:28:59.618 "name": "basen1", 00:28:59.618 "aliases": [ 00:28:59.618 "7c0a1a6e-8527-4339-a672-e8c315964ae5" 00:28:59.618 ], 00:28:59.618 "product_name": "NVMe disk", 00:28:59.618 "block_size": 4096, 00:28:59.618 "num_blocks": 1310720, 00:28:59.618 "uuid": "7c0a1a6e-8527-4339-a672-e8c315964ae5", 00:28:59.618 "numa_id": -1, 00:28:59.618 "assigned_rate_limits": { 00:28:59.618 "rw_ios_per_sec": 0, 00:28:59.618 "rw_mbytes_per_sec": 0, 00:28:59.618 "r_mbytes_per_sec": 0, 00:28:59.618 "w_mbytes_per_sec": 0 00:28:59.618 }, 00:28:59.618 "claimed": true, 00:28:59.618 "claim_type": "read_many_write_one", 00:28:59.618 "zoned": false, 00:28:59.618 "supported_io_types": { 00:28:59.618 "read": true, 00:28:59.618 "write": true, 00:28:59.618 "unmap": true, 00:28:59.618 "flush": true, 00:28:59.618 "reset": true, 00:28:59.618 "nvme_admin": true, 00:28:59.618 "nvme_io": true, 00:28:59.618 "nvme_io_md": false, 00:28:59.618 "write_zeroes": true, 00:28:59.618 "zcopy": false, 00:28:59.618 "get_zone_info": false, 00:28:59.618 "zone_management": false, 00:28:59.618 "zone_append": false, 00:28:59.618 "compare": true, 00:28:59.618 "compare_and_write": false, 00:28:59.618 "abort": true, 00:28:59.618 "seek_hole": false, 00:28:59.618 "seek_data": false, 00:28:59.618 "copy": true, 00:28:59.618 "nvme_iov_md": false 00:28:59.618 }, 00:28:59.618 "driver_specific": { 00:28:59.618 "nvme": [ 00:28:59.618 { 00:28:59.618 "pci_address": "0000:00:11.0", 00:28:59.618 "trid": { 00:28:59.618 "trtype": "PCIe", 00:28:59.618 "traddr": "0000:00:11.0" 00:28:59.618 }, 00:28:59.618 "ctrlr_data": { 00:28:59.618 "cntlid": 0, 00:28:59.618 "vendor_id": "0x1b36", 00:28:59.618 "model_number": "QEMU NVMe Ctrl", 00:28:59.618 "serial_number": "12341", 00:28:59.618 "firmware_revision": "8.0.0", 00:28:59.618 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:59.618 "oacs": { 00:28:59.618 "security": 0, 00:28:59.618 "format": 1, 00:28:59.618 "firmware": 0, 00:28:59.618 "ns_manage": 1 00:28:59.618 }, 00:28:59.618 "multi_ctrlr": false, 00:28:59.618 "ana_reporting": false 00:28:59.618 }, 00:28:59.618 "vs": { 00:28:59.618 "nvme_version": "1.4" 00:28:59.618 }, 00:28:59.618 "ns_data": { 00:28:59.618 "id": 1, 00:28:59.618 "can_share": false 00:28:59.618 } 00:28:59.618 } 00:28:59.618 ], 00:28:59.618 "mp_policy": "active_passive" 00:28:59.618 } 00:28:59.618 } 00:28:59.618 ]' 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:59.618 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:59.879 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f2480f98-15db-4616-aeff-a420225768db 00:28:59.879 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:59.879 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2480f98-15db-4616-aeff-a420225768db 00:29:00.140 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:00.401 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=4cea333b-9e20-4a4a-805b-445f763d63aa 00:29:00.401 17:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 4cea333b-9e20-4a4a-805b-445f763d63aa 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 ]] 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 5120 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:00.663 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:00.924 { 00:29:00.924 "name": "b0cb9b7d-b439-40dc-845e-6ba51ce31ec0", 00:29:00.924 "aliases": [ 00:29:00.924 "lvs/basen1p0" 00:29:00.924 ], 00:29:00.924 "product_name": "Logical Volume", 00:29:00.924 "block_size": 4096, 00:29:00.924 "num_blocks": 5242880, 00:29:00.924 "uuid": "b0cb9b7d-b439-40dc-845e-6ba51ce31ec0", 00:29:00.924 "assigned_rate_limits": { 00:29:00.924 "rw_ios_per_sec": 0, 00:29:00.924 "rw_mbytes_per_sec": 0, 00:29:00.924 "r_mbytes_per_sec": 0, 00:29:00.924 "w_mbytes_per_sec": 0 00:29:00.924 }, 00:29:00.924 "claimed": false, 00:29:00.924 "zoned": false, 00:29:00.924 "supported_io_types": { 00:29:00.924 "read": true, 00:29:00.924 "write": true, 00:29:00.924 "unmap": true, 00:29:00.924 "flush": false, 00:29:00.924 "reset": true, 00:29:00.924 "nvme_admin": false, 00:29:00.924 "nvme_io": false, 00:29:00.924 "nvme_io_md": false, 00:29:00.924 "write_zeroes": 
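(Editor's note: the sequence just traced carves the 20480 MiB thin-provisioned base volume out of the QEMU NVMe namespace. Condensed into the same rpc.py calls seen above, with the lvstore UUID captured from the create call; a sketch, not the full clear_lvols/create_base_bdev helpers:)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0     # exposes namespace as basen1
for u in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do    # drop stale lvstores from earlier runs
    $rpc bdev_lvol_delete_lvstore -u "$u"
done
lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)                      # prints the new lvstore UUID
$rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs"                    # 20480 MiB thin lvol for the FTL base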
true, 00:29:00.924 "zcopy": false, 00:29:00.924 "get_zone_info": false, 00:29:00.924 "zone_management": false, 00:29:00.924 "zone_append": false, 00:29:00.924 "compare": false, 00:29:00.924 "compare_and_write": false, 00:29:00.924 "abort": false, 00:29:00.924 "seek_hole": true, 00:29:00.924 "seek_data": true, 00:29:00.924 "copy": false, 00:29:00.924 "nvme_iov_md": false 00:29:00.924 }, 00:29:00.924 "driver_specific": { 00:29:00.924 "lvol": { 00:29:00.924 "lvol_store_uuid": "4cea333b-9e20-4a4a-805b-445f763d63aa", 00:29:00.924 "base_bdev": "basen1", 00:29:00.924 "thin_provision": true, 00:29:00.924 "num_allocated_clusters": 0, 00:29:00.924 "snapshot": false, 00:29:00.924 "clone": false, 00:29:00.924 "esnap_clone": false 00:29:00.924 } 00:29:00.924 } 00:29:00.924 } 00:29:00.924 ]' 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:00.924 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:01.185 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:01.185 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:01.185 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:01.447 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:01.447 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:01.447 17:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 -c cachen1p0 --l2p_dram_limit 2 00:29:01.447 [2024-11-20 17:59:24.978997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.447 [2024-11-20 17:59:24.979030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:01.447 [2024-11-20 17:59:24.979043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:01.447 [2024-11-20 17:59:24.979049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.447 [2024-11-20 17:59:24.979089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.447 [2024-11-20 17:59:24.979097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:01.447 [2024-11-20 17:59:24.979104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:01.447 [2024-11-20 17:59:24.979110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.447 [2024-11-20 17:59:24.979126] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:01.447 [2024-11-20 
17:59:24.979682] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:01.447 [2024-11-20 17:59:24.979699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.447 [2024-11-20 17:59:24.979705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:01.447 [2024-11-20 17:59:24.979713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 00:29:01.447 [2024-11-20 17:59:24.979719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.447 [2024-11-20 17:59:24.979745] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b63b0810-ad5f-4ecc-9d05-1a07de58f153 00:29:01.447 [2024-11-20 17:59:24.980743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.447 [2024-11-20 17:59:24.980767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:01.447 [2024-11-20 17:59:24.980775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:01.447 [2024-11-20 17:59:24.980783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.985612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.708 [2024-11-20 17:59:24.985641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:01.708 [2024-11-20 17:59:24.985649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.773 ms 00:29:01.708 [2024-11-20 17:59:24.985657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.985686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.708 [2024-11-20 17:59:24.985694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:01.708 [2024-11-20 17:59:24.985701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:01.708 [2024-11-20 17:59:24.985709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.985744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.708 [2024-11-20 17:59:24.985753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:01.708 [2024-11-20 17:59:24.985759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:01.708 [2024-11-20 17:59:24.985769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.985787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:01.708 [2024-11-20 17:59:24.988627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.708 [2024-11-20 17:59:24.988652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:01.708 [2024-11-20 17:59:24.988662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.842 ms 00:29:01.708 [2024-11-20 17:59:24.988668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.988689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.708 [2024-11-20 17:59:24.988695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:01.708 [2024-11-20 17:59:24.988703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:01.708 [2024-11-20 17:59:24.988710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.988729] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:01.708 [2024-11-20 17:59:24.988833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:01.708 [2024-11-20 17:59:24.988846] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:01.708 [2024-11-20 17:59:24.988854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:01.708 [2024-11-20 17:59:24.988864] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:01.708 [2024-11-20 17:59:24.988881] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:01.708 [2024-11-20 17:59:24.988888] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:01.708 [2024-11-20 17:59:24.988894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:01.708 [2024-11-20 17:59:24.988903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:01.708 [2024-11-20 17:59:24.988909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:01.708 [2024-11-20 17:59:24.988915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.708 [2024-11-20 17:59:24.988922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:01.708 [2024-11-20 17:59:24.988929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:29:01.708 [2024-11-20 17:59:24.988936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.989001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.708 [2024-11-20 17:59:24.989008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:01.708 [2024-11-20 17:59:24.989016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:01.708 [2024-11-20 17:59:24.989026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.708 [2024-11-20 17:59:24.989105] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:01.708 [2024-11-20 17:59:24.989112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:01.708 [2024-11-20 17:59:24.989120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:01.708 [2024-11-20 17:59:24.989139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:01.708 [2024-11-20 17:59:24.989151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:01.708 [2024-11-20 17:59:24.989157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:01.708 [2024-11-20 17:59:24.989163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:01.708 [2024-11-20 17:59:24.989175] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:01.708 [2024-11-20 17:59:24.989182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:01.708 [2024-11-20 17:59:24.989194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:01.708 [2024-11-20 17:59:24.989200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:01.708 [2024-11-20 17:59:24.989214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:01.708 [2024-11-20 17:59:24.989221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:01.708 [2024-11-20 17:59:24.989235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:01.708 [2024-11-20 17:59:24.989240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:01.708 [2024-11-20 17:59:24.989251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:01.708 [2024-11-20 17:59:24.989257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:01.708 [2024-11-20 17:59:24.989269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:01.708 [2024-11-20 17:59:24.989274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:01.708 [2024-11-20 17:59:24.989286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:01.708 [2024-11-20 17:59:24.989292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:01.708 [2024-11-20 17:59:24.989304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:01.708 [2024-11-20 17:59:24.989309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:01.708 [2024-11-20 17:59:24.989320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:01.708 [2024-11-20 17:59:24.989337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:01.708 [2024-11-20 17:59:24.989353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:01.708 [2024-11-20 17:59:24.989359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989364] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:01.708 [2024-11-20 17:59:24.989371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:01.708 [2024-11-20 17:59:24.989376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:01.708 [2024-11-20 17:59:24.989390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:01.708 [2024-11-20 17:59:24.989399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:01.708 [2024-11-20 17:59:24.989404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:01.708 [2024-11-20 17:59:24.989411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:01.708 [2024-11-20 17:59:24.989415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:01.708 [2024-11-20 17:59:24.989422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:01.708 [2024-11-20 17:59:24.989430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:01.708 [2024-11-20 17:59:24.989439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.708 [2024-11-20 17:59:24.989447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:01.708 [2024-11-20 17:59:24.989453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:01.708 [2024-11-20 17:59:24.989459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:01.708 [2024-11-20 17:59:24.989466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:01.708 [2024-11-20 17:59:24.989471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:01.708 [2024-11-20 17:59:24.989478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:01.709 [2024-11-20 17:59:24.989484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:01.709 [2024-11-20 17:59:24.989490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:01.709 [2024-11-20 17:59:24.989534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:01.709 [2024-11-20 17:59:24.989542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:01.709 [2024-11-20 17:59:24.989555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:01.709 [2024-11-20 17:59:24.989561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:01.709 [2024-11-20 17:59:24.989568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:01.709 [2024-11-20 17:59:24.989574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.709 [2024-11-20 17:59:24.989580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:01.709 [2024-11-20 17:59:24.989586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:29:01.709 [2024-11-20 17:59:24.989593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.709 [2024-11-20 17:59:24.989622] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
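For reference, the device stack whose FTL layout is dumped above reduces to this RPC sequence, condensed from the trace (every command, address, and UUID below appears verbatim in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_lvol_delete_lvstore -u f2480f98-15db-4616-aeff-a420225768db    # clear_lvols: drop the stale store found on basen1
$rpc bdev_lvol_create_lvstore basen1 lvs                                 # -> lvs 4cea333b-9e20-4a4a-805b-445f763d63aa
$rpc bdev_lvol_create basen1p0 20480 -t -u 4cea333b-9e20-4a4a-805b-445f763d63aa   # thin 20 GiB base -> b0cb9b7d-b439-40dc-845e-6ba51ce31ec0
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0        # -> cachen1
$rpc bdev_split_create cachen1 -s 5120 1                                 # 5 GiB slice -> cachen1p0, the NV cache
$rpc -t 60 bdev_ftl_create -b ftl -d b0cb9b7d-b439-40dc-845e-6ba51ce31ec0 -c cachen1p0 --l2p_dram_limit 2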
00:29:01.709 [2024-11-20 17:59:24.989633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:05.917 [2024-11-20 17:59:29.245508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.917 [2024-11-20 17:59:29.245793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:05.917 [2024-11-20 17:59:29.245820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4255.872 ms 00:29:05.917 [2024-11-20 17:59:29.245833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.917 [2024-11-20 17:59:29.276680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.917 [2024-11-20 17:59:29.276951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:05.917 [2024-11-20 17:59:29.276974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.582 ms 00:29:05.917 [2024-11-20 17:59:29.276985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.917 [2024-11-20 17:59:29.277074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.917 [2024-11-20 17:59:29.277088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:05.917 [2024-11-20 17:59:29.277097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:05.918 [2024-11-20 17:59:29.277115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.312278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.312487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:05.918 [2024-11-20 17:59:29.312507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.126 ms 00:29:05.918 [2024-11-20 17:59:29.312518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.312556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.312573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:05.918 [2024-11-20 17:59:29.312582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:05.918 [2024-11-20 17:59:29.312592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.313198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.313229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:05.918 [2024-11-20 17:59:29.313242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:29:05.918 [2024-11-20 17:59:29.313252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.313307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.313320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:05.918 [2024-11-20 17:59:29.313334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:05.918 [2024-11-20 17:59:29.313347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.330454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.330503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:05.918 [2024-11-20 17:59:29.330514] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.081 ms 00:29:05.918 [2024-11-20 17:59:29.330524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.358379] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:05.918 [2024-11-20 17:59:29.359792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.359843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:05.918 [2024-11-20 17:59:29.359861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.177 ms 00:29:05.918 [2024-11-20 17:59:29.359898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.391547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.391743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:05.918 [2024-11-20 17:59:29.391773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.596 ms 00:29:05.918 [2024-11-20 17:59:29.391783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.391908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.391925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:05.918 [2024-11-20 17:59:29.391941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:29:05.918 [2024-11-20 17:59:29.391951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.417598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.417645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:05.918 [2024-11-20 17:59:29.417661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.589 ms 00:29:05.918 [2024-11-20 17:59:29.417670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.443068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.443112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:05.918 [2024-11-20 17:59:29.443126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.339 ms 00:29:05.918 [2024-11-20 17:59:29.443135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.918 [2024-11-20 17:59:29.443754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.918 [2024-11-20 17:59:29.443779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:05.918 [2024-11-20 17:59:29.443791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.571 ms 00:29:05.918 [2024-11-20 17:59:29.443802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.178 [2024-11-20 17:59:29.535069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.178 [2024-11-20 17:59:29.535120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:06.178 [2024-11-20 17:59:29.535141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 91.220 ms 00:29:06.178 [2024-11-20 17:59:29.535150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.178 [2024-11-20 17:59:29.562342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:06.178 [2024-11-20 17:59:29.562393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:06.178 [2024-11-20 17:59:29.562417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.090 ms 00:29:06.178 [2024-11-20 17:59:29.562426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.178 [2024-11-20 17:59:29.587972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.178 [2024-11-20 17:59:29.588017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:06.178 [2024-11-20 17:59:29.588032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.489 ms 00:29:06.178 [2024-11-20 17:59:29.588039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.178 [2024-11-20 17:59:29.613986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.178 [2024-11-20 17:59:29.614033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:06.178 [2024-11-20 17:59:29.614048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.893 ms 00:29:06.178 [2024-11-20 17:59:29.614056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.178 [2024-11-20 17:59:29.614112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.178 [2024-11-20 17:59:29.614123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:06.178 [2024-11-20 17:59:29.614137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:06.178 [2024-11-20 17:59:29.614146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.178 [2024-11-20 17:59:29.614237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.178 [2024-11-20 17:59:29.614249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:06.178 [2024-11-20 17:59:29.614263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:06.178 [2024-11-20 17:59:29.614271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.178 [2024-11-20 17:59:29.615504] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4636.019 ms, result 0 00:29:06.178 { 00:29:06.178 "name": "ftl", 00:29:06.178 "uuid": "b63b0810-ad5f-4ecc-9d05-1a07de58f153" 00:29:06.178 } 00:29:06.178 17:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:06.439 [2024-11-20 17:59:29.834570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.439 17:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:06.701 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:06.963 [2024-11-20 17:59:30.279061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:06.963 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:06.963 [2024-11-20 17:59:30.500526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:07.225 17:59:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:07.486 Fill FTL, iteration 1 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:07.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82583 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82583 /var/tmp/spdk.tgt.sock 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82583 ']' 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.486 17:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.486 [2024-11-20 17:59:30.948989] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
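The NVMe/TCP plumbing traced above, target-side export plus tcp_initiator_setup, condenses to the sketch below; the backgrounding of spdk_tgt is an assumption (the trace only shows its pid being captured), everything else is copied from the trace:

# target side (default RPC socket): export the FTL bdev at 127.0.0.1:4420
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config

# initiator side: a second SPDK app pinned to core 1 with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &   # assumed backgrounded; pid 82583 in this run
spdk_ini_pid=$!
ini_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
$ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # namespace shows up as ftln1
{ echo '{"subsystems": ['; $ini_rpc save_subsystem_config -n bdev; echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json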
00:29:07.486 [2024-11-20 17:59:30.949373] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82583 ] 00:29:07.747 [2024-11-20 17:59:31.109484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.747 [2024-11-20 17:59:31.232307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.691 17:59:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.691 17:59:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:08.691 17:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:08.691 ftln1 00:29:08.691 17:59:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:08.691 17:59:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82583 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82583 ']' 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82583 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82583 00:29:08.953 killing process with pid 82583 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82583' 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82583 00:29:08.953 17:59:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82583 00:29:10.362 17:59:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:10.362 17:59:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:10.362 [2024-11-20 17:59:33.871860] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
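The fill parameters set at upgrade_shutdown.sh@28-34 above give each pass this shape:

size=1073741824        # bytes per iteration: 1 GiB
bs=1048576             # 1 MiB I/O units
count=$((size / bs))   # = 1024 blocks per pass
qd=2                   # two I/Os in flight
iterations=2           # 2 GiB of urandom total, against the 20 GiB FTL bdev
# pass i writes blocks [i*1024, (i+1)*1024): --seek=0 in iteration 1, --seek=1024 in iteration 2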
00:29:10.362 [2024-11-20 17:59:33.872484] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82627 ] 00:29:10.657 [2024-11-20 17:59:34.029770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.657 [2024-11-20 17:59:34.105391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.057  [2024-11-20T17:59:36.538Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-20T17:59:37.482Z] Copying: 491/1024 [MB] (244 MBps) [2024-11-20T17:59:38.426Z] Copying: 692/1024 [MB] (201 MBps) [2024-11-20T17:59:39.373Z] Copying: 870/1024 [MB] (178 MBps) [2024-11-20T17:59:40.319Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:29:16.779 00:29:16.779 Calculate MD5 checksum, iteration 1 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:16.779 17:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.779 [2024-11-20 17:59:40.285296] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
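Each fill is followed by a readback of the same extent through ftln1 and a checksum of the scratch file; joined together, the two commands traced here are:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '    # -> sums[0], 2cf536be2aa30a119cff53b36d9eba6c below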
00:29:16.779 [2024-11-20 17:59:40.285468] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82696 ] 00:29:17.040 [2024-11-20 17:59:40.451044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.301 [2024-11-20 17:59:40.595053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.687  [2024-11-20T17:59:42.794Z] Copying: 569/1024 [MB] (569 MBps) [2024-11-20T17:59:43.363Z] Copying: 1024/1024 [MB] (average 569 MBps) 00:29:19.823 00:29:19.823 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:19.823 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:22.369 Fill FTL, iteration 2 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2cf536be2aa30a119cff53b36d9eba6c 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:22.369 17:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:22.369 [2024-11-20 17:59:45.395776] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
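Reassembled from the seek/skip/sums updates visible in the trace, the driving loop of upgrade_shutdown.sh runs as follows, in paraphrase (a sketch, not the verbatim script; tcp_dd is the helper traced above, and $testfile stands for /home/vagrant/spdk_repo/spdk/test/ftl/file):

sums=()
seek=0; skip=0
for ((i = 0; i < iterations; i++)); do
  echo "Fill FTL, iteration $((i + 1))"
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
  seek=$((seek + count))                                  # 0 -> 1024 -> 2048
  echo "Calculate MD5 checksum, iteration $((i + 1))"
  tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
  skip=$((skip + count))                                  # 0 -> 1024 -> 2048
  sums[i]=$(md5sum $testfile | cut -f1 '-d ')
done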
00:29:22.369 [2024-11-20 17:59:45.396337] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82752 ] 00:29:22.369 [2024-11-20 17:59:45.555493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.369 [2024-11-20 17:59:45.662848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.756  [2024-11-20T17:59:48.235Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-20T17:59:49.171Z] Copying: 425/1024 [MB] (213 MBps) [2024-11-20T17:59:50.110Z] Copying: 654/1024 [MB] (229 MBps) [2024-11-20T17:59:50.677Z] Copying: 891/1024 [MB] (237 MBps) [2024-11-20T17:59:51.246Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:29:27.706 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:27.706 Calculate MD5 checksum, iteration 2 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:27.706 17:59:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:27.966 [2024-11-20 17:59:51.261453] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
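After the second checksum below, the test flips the upgrade knob and sanity-checks that the fill actually dirtied the NV cache before shutting the target down; the RPCs and jq filter are verbatim from the trace that follows, while the final guard is only a sketch of the @64 check:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
$rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
# count cache chunks with non-zero utilization; 3 in this run (2 CLOSED + 1 OPEN)
used=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
[[ $used -eq 0 ]] && exit 1   # assumed failure action; the trace itself only shows [[ 3 -eq 0 ]]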
00:29:27.966 [2024-11-20 17:59:51.261660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82819 ] 00:29:27.966 [2024-11-20 17:59:51.411714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.966 [2024-11-20 17:59:51.498284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.870  [2024-11-20T17:59:53.670Z] Copying: 627/1024 [MB] (627 MBps) [2024-11-20T17:59:54.609Z] Copying: 1024/1024 [MB] (average 616 MBps) 00:29:31.069 00:29:31.069 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:31.069 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:33.613 17:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:33.613 17:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cdc7767743c18f49bee2ea703705a17f 00:29:33.613 17:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:33.613 17:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:33.613 17:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:33.613 [2024-11-20 17:59:56.752061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.613 [2024-11-20 17:59:56.752101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:33.613 [2024-11-20 17:59:56.752112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:33.613 [2024-11-20 17:59:56.752119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.613 [2024-11-20 17:59:56.752136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.613 [2024-11-20 17:59:56.752143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:33.613 [2024-11-20 17:59:56.752152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:33.613 [2024-11-20 17:59:56.752158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.613 [2024-11-20 17:59:56.752173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.613 [2024-11-20 17:59:56.752179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:33.613 [2024-11-20 17:59:56.752186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:33.613 [2024-11-20 17:59:56.752191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.613 [2024-11-20 17:59:56.752238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0 00:29:33.613 true 00:29:33.613 17:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:33.613 { 00:29:33.613 "name": "ftl", 00:29:33.613 "properties": [ 00:29:33.613 { 00:29:33.613 "name": "superblock_version", 00:29:33.613 "value": 5, 00:29:33.613 "read-only": true 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "name": "base_device", 00:29:33.613 "bands": [ 00:29:33.613 { 00:29:33.613 "id": 0, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 
00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 1, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 2, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 3, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 4, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 5, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 6, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 7, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 8, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 9, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 10, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 11, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 12, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 13, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 14, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 15, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 16, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 17, 00:29:33.613 "state": "FREE", 00:29:33.613 "validity": 0.0 00:29:33.613 } 00:29:33.613 ], 00:29:33.613 "read-only": true 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "name": "cache_device", 00:29:33.613 "type": "bdev", 00:29:33.613 "chunks": [ 00:29:33.613 { 00:29:33.613 "id": 0, 00:29:33.613 "state": "INACTIVE", 00:29:33.613 "utilization": 0.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 1, 00:29:33.613 "state": "CLOSED", 00:29:33.613 "utilization": 1.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 2, 00:29:33.613 "state": "CLOSED", 00:29:33.613 "utilization": 1.0 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 3, 00:29:33.613 "state": "OPEN", 00:29:33.613 "utilization": 0.001953125 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "id": 4, 00:29:33.613 "state": "OPEN", 00:29:33.613 "utilization": 0.0 00:29:33.613 } 00:29:33.613 ], 00:29:33.613 "read-only": true 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "name": "verbose_mode", 00:29:33.613 "value": true, 00:29:33.613 "unit": "", 00:29:33.613 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:33.613 }, 00:29:33.613 { 00:29:33.613 "name": "prep_upgrade_on_shutdown", 00:29:33.613 "value": false, 00:29:33.613 "unit": "", 00:29:33.613 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:33.613 } 00:29:33.613 ] 00:29:33.613 } 00:29:33.613 17:59:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:33.613 [2024-11-20 17:59:57.076339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:33.613 [2024-11-20 17:59:57.076371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:33.613 [2024-11-20 17:59:57.076379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:33.613 [2024-11-20 17:59:57.076385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.613 [2024-11-20 17:59:57.076401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.613 [2024-11-20 17:59:57.076407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:33.613 [2024-11-20 17:59:57.076412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:33.613 [2024-11-20 17:59:57.076418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.613 [2024-11-20 17:59:57.076432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:33.613 [2024-11-20 17:59:57.076438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:33.613 [2024-11-20 17:59:57.076443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:33.613 [2024-11-20 17:59:57.076449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:33.613 [2024-11-20 17:59:57.076490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.141 ms, result 0 00:29:33.613 true 00:29:33.613 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:33.613 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:33.613 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:33.875 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:33.875 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:33.875 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:34.136 [2024-11-20 17:59:57.452662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.136 [2024-11-20 17:59:57.452797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:34.137 [2024-11-20 17:59:57.452848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:34.137 [2024-11-20 17:59:57.452866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.137 [2024-11-20 17:59:57.452908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.137 [2024-11-20 17:59:57.452925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:34.137 [2024-11-20 17:59:57.452942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:34.137 [2024-11-20 17:59:57.452956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.137 [2024-11-20 17:59:57.452980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.137 [2024-11-20 17:59:57.452995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:34.137 [2024-11-20 17:59:57.453011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:34.137 [2024-11-20 17:59:57.453054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:34.137 [2024-11-20 17:59:57.453111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.436 ms, result 0 00:29:34.137 true 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:34.137 { 00:29:34.137 "name": "ftl", 00:29:34.137 "properties": [ 00:29:34.137 { 00:29:34.137 "name": "superblock_version", 00:29:34.137 "value": 5, 00:29:34.137 "read-only": true 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "name": "base_device", 00:29:34.137 "bands": [ 00:29:34.137 { 00:29:34.137 "id": 0, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 1, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 2, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 3, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 4, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 5, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 6, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 7, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 8, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 9, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 10, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 11, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 12, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 13, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 14, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 15, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 16, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 17, 00:29:34.137 "state": "FREE", 00:29:34.137 "validity": 0.0 00:29:34.137 } 00:29:34.137 ], 00:29:34.137 "read-only": true 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "name": "cache_device", 00:29:34.137 "type": "bdev", 00:29:34.137 "chunks": [ 00:29:34.137 { 00:29:34.137 "id": 0, 00:29:34.137 "state": "INACTIVE", 00:29:34.137 "utilization": 0.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 1, 00:29:34.137 "state": "CLOSED", 00:29:34.137 "utilization": 1.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 2, 00:29:34.137 "state": "CLOSED", 00:29:34.137 "utilization": 1.0 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 3, 00:29:34.137 "state": "OPEN", 00:29:34.137 "utilization": 0.001953125 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "id": 4, 00:29:34.137 "state": "OPEN", 00:29:34.137 "utilization": 0.0 00:29:34.137 } 00:29:34.137 ], 00:29:34.137 "read-only": true 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "name": "verbose_mode", 
00:29:34.137 "value": true, 00:29:34.137 "unit": "", 00:29:34.137 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:34.137 }, 00:29:34.137 { 00:29:34.137 "name": "prep_upgrade_on_shutdown", 00:29:34.137 "value": true, 00:29:34.137 "unit": "", 00:29:34.137 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:34.137 } 00:29:34.137 ] 00:29:34.137 } 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82450 ]] 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82450 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82450 ']' 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82450 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.137 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82450 00:29:34.399 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:34.399 killing process with pid 82450 00:29:34.399 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:34.399 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82450' 00:29:34.399 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82450 00:29:34.399 17:59:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82450 00:29:34.971 [2024-11-20 17:59:58.213170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:34.971 [2024-11-20 17:59:58.225159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.971 [2024-11-20 17:59:58.225192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:34.971 [2024-11-20 17:59:58.225202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:34.971 [2024-11-20 17:59:58.225209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.971 [2024-11-20 17:59:58.225227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:34.971 [2024-11-20 17:59:58.227369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.971 [2024-11-20 17:59:58.227393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:34.971 [2024-11-20 17:59:58.227401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.131 ms 00:29:34.971 [2024-11-20 17:59:58.227408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.114 [2024-11-20 18:00:06.387064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.114 [2024-11-20 18:00:06.387106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:43.114 [2024-11-20 18:00:06.387117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8159.614 ms 00:29:43.114 [2024-11-20 18:00:06.387127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.114 [2024-11-20 18:00:06.388187] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:43.114 [2024-11-20 18:00:06.388201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:43.114 [2024-11-20 18:00:06.388209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:29:43.114 [2024-11-20 18:00:06.388215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.114 [2024-11-20 18:00:06.389110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.114 [2024-11-20 18:00:06.389127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:43.114 [2024-11-20 18:00:06.389134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:29:43.114 [2024-11-20 18:00:06.389144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.114 [2024-11-20 18:00:06.397081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.114 [2024-11-20 18:00:06.397105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:43.114 [2024-11-20 18:00:06.397111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.908 ms 00:29:43.114 [2024-11-20 18:00:06.397118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.114 [2024-11-20 18:00:06.402373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.114 [2024-11-20 18:00:06.402399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:43.114 [2024-11-20 18:00:06.402408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.230 ms 00:29:43.114 [2024-11-20 18:00:06.402414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.114 [2024-11-20 18:00:06.402469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.402476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:43.115 [2024-11-20 18:00:06.402486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:43.115 [2024-11-20 18:00:06.402492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.409962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.409984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:43.115 [2024-11-20 18:00:06.409991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.459 ms 00:29:43.115 [2024-11-20 18:00:06.409997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.417918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.417939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:43.115 [2024-11-20 18:00:06.417946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.896 ms 00:29:43.115 [2024-11-20 18:00:06.417951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.425010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.425036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:43.115 [2024-11-20 18:00:06.425045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.033 ms 00:29:43.115 [2024-11-20 18:00:06.425053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.432626] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.432653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:43.115 [2024-11-20 18:00:06.432660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.513 ms 00:29:43.115 [2024-11-20 18:00:06.432666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.432691] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:43.115 [2024-11-20 18:00:06.432702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:43.115 [2024-11-20 18:00:06.432710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:43.115 [2024-11-20 18:00:06.432723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:43.115 [2024-11-20 18:00:06.432729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:43.115 [2024-11-20 18:00:06.432817] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:43.115 [2024-11-20 18:00:06.432823] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b63b0810-ad5f-4ecc-9d05-1a07de58f153 00:29:43.115 [2024-11-20 18:00:06.432829] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:43.115 [2024-11-20 18:00:06.432835] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:43.115 [2024-11-20 18:00:06.432841] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:43.115 [2024-11-20 18:00:06.432847] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:43.115 [2024-11-20 18:00:06.432852] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:43.115 [2024-11-20 18:00:06.432860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:43.115 [2024-11-20 18:00:06.432865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:43.115 [2024-11-20 18:00:06.432883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:43.115 [2024-11-20 18:00:06.432899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:43.115 [2024-11-20 18:00:06.432905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.432913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:43.115 [2024-11-20 18:00:06.432919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:29:43.115 [2024-11-20 18:00:06.432925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.442713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.442737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:43.115 [2024-11-20 18:00:06.442745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.768 ms 00:29:43.115 [2024-11-20 18:00:06.442755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.443039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.115 [2024-11-20 18:00:06.443050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:43.115 [2024-11-20 18:00:06.443057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:29:43.115 [2024-11-20 18:00:06.443062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.476157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.476180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:43.115 [2024-11-20 18:00:06.476192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.476198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.476220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.476227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:43.115 [2024-11-20 18:00:06.476233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.476239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.476286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.476294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:43.115 [2024-11-20 18:00:06.476300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.476309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.476321] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.476328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:43.115 [2024-11-20 18:00:06.476334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.476340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.536273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.536298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:43.115 [2024-11-20 18:00:06.536311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.536317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.585146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.585174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:43.115 [2024-11-20 18:00:06.585181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.585187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.585249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.585257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:43.115 [2024-11-20 18:00:06.585263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.585269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.585304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.585311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:43.115 [2024-11-20 18:00:06.585317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.585323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.585392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.585399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:43.115 [2024-11-20 18:00:06.585405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.585411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.585433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.585443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:43.115 [2024-11-20 18:00:06.585449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.585454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 [2024-11-20 18:00:06.585482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.585488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:43.115 [2024-11-20 18:00:06.585494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.115 [2024-11-20 18:00:06.585500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.115 
[2024-11-20 18:00:06.585536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:43.115 [2024-11-20 18:00:06.585549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:43.116 [2024-11-20 18:00:06.585555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:43.116 [2024-11-20 18:00:06.585560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.116 [2024-11-20 18:00:06.585650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8360.453 ms, result 0 00:29:48.482 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:48.482 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:48.482 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:48.482 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:48.482 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:48.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.482 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83007 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83007 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83007 ']' 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.483 18:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.483 [2024-11-20 18:00:11.119656] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
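Annotation: the used=3 reported above is the number of NV-cache chunks still holding data before the prepared shutdown, which matches the property dump (chunks 1 and 2 CLOSED at utilization 1.0, chunk 3 OPEN at ~0.002). A minimal standalone sketch of that check, with the paths and jq filter taken from this run:

    # Count cache chunks with non-zero utilization (mirrors upgrade_shutdown.sh@59-64).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    used=$("$rpc" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    # Illustrative failure handling; the trace above only shows the test evaluating [[ 3 -eq 0 ]].
    [[ $used -eq 0 ]] && exit 1

Note also that the statistics dumped during the clean shutdown are self-consistent: WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006, exactly the value reported.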
00:29:48.483 [2024-11-20 18:00:11.120011] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83007 ] 00:29:48.483 [2024-11-20 18:00:11.277811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.483 [2024-11-20 18:00:11.365099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.483 [2024-11-20 18:00:12.019723] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:48.483 [2024-11-20 18:00:12.019775] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:48.743 [2024-11-20 18:00:12.162723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.162760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:48.743 [2024-11-20 18:00:12.162770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:48.743 [2024-11-20 18:00:12.162777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.162818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.162827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:48.743 [2024-11-20 18:00:12.162833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:48.743 [2024-11-20 18:00:12.162838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.162852] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:48.743 [2024-11-20 18:00:12.163373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:48.743 [2024-11-20 18:00:12.163391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.163397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:48.743 [2024-11-20 18:00:12.163403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:29:48.743 [2024-11-20 18:00:12.163409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.164396] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:48.743 [2024-11-20 18:00:12.173804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.173835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:48.743 [2024-11-20 18:00:12.173846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.410 ms 00:29:48.743 [2024-11-20 18:00:12.173852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.173903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.173912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:48.743 [2024-11-20 18:00:12.173918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:48.743 [2024-11-20 18:00:12.173923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.178263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 
18:00:12.178288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:48.743 [2024-11-20 18:00:12.178296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.291 ms 00:29:48.743 [2024-11-20 18:00:12.178301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.178341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.178348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:48.743 [2024-11-20 18:00:12.178355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:48.743 [2024-11-20 18:00:12.178360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.178393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.178400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:48.743 [2024-11-20 18:00:12.178408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:48.743 [2024-11-20 18:00:12.178414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.178429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:48.743 [2024-11-20 18:00:12.181057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.181083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:48.743 [2024-11-20 18:00:12.181090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.631 ms 00:29:48.743 [2024-11-20 18:00:12.181098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.181120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.743 [2024-11-20 18:00:12.181126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:48.743 [2024-11-20 18:00:12.181133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:48.743 [2024-11-20 18:00:12.181138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.743 [2024-11-20 18:00:12.181153] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:48.743 [2024-11-20 18:00:12.181167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:48.743 [2024-11-20 18:00:12.181194] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:48.743 [2024-11-20 18:00:12.181205] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:48.743 [2024-11-20 18:00:12.181283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:48.743 [2024-11-20 18:00:12.181291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:48.743 [2024-11-20 18:00:12.181299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:48.743 [2024-11-20 18:00:12.181307] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:48.743 [2024-11-20 18:00:12.181318] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:48.744 [2024-11-20 18:00:12.181331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:48.744 [2024-11-20 18:00:12.181337] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:48.744 [2024-11-20 18:00:12.181343] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:48.744 [2024-11-20 18:00:12.181348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.744 [2024-11-20 18:00:12.181354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:48.744 [2024-11-20 18:00:12.181359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:29:48.744 [2024-11-20 18:00:12.181365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.744 [2024-11-20 18:00:12.181429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.744 [2024-11-20 18:00:12.181440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:48.744 [2024-11-20 18:00:12.181446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:48.744 [2024-11-20 18:00:12.181453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.744 [2024-11-20 18:00:12.181528] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:48.744 [2024-11-20 18:00:12.181539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:48.744 [2024-11-20 18:00:12.181546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:48.744 [2024-11-20 18:00:12.181562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:48.744 [2024-11-20 18:00:12.181572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:48.744 [2024-11-20 18:00:12.181577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:48.744 [2024-11-20 18:00:12.181582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:48.744 [2024-11-20 18:00:12.181592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:48.744 [2024-11-20 18:00:12.181597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:48.744 [2024-11-20 18:00:12.181608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:48.744 [2024-11-20 18:00:12.181613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:48.744 [2024-11-20 18:00:12.181626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:48.744 [2024-11-20 18:00:12.181630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181635] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:48.744 [2024-11-20 18:00:12.181640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:48.744 [2024-11-20 18:00:12.181645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:48.744 [2024-11-20 18:00:12.181655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:48.744 [2024-11-20 18:00:12.181660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:48.744 [2024-11-20 18:00:12.181675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:48.744 [2024-11-20 18:00:12.181680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:48.744 [2024-11-20 18:00:12.181689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:48.744 [2024-11-20 18:00:12.181694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:48.744 [2024-11-20 18:00:12.181704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:48.744 [2024-11-20 18:00:12.181709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:48.744 [2024-11-20 18:00:12.181718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:48.744 [2024-11-20 18:00:12.181733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:48.744 [2024-11-20 18:00:12.181747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:48.744 [2024-11-20 18:00:12.181752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181757] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:48.744 [2024-11-20 18:00:12.181763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:48.744 [2024-11-20 18:00:12.181768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:48.744 [2024-11-20 18:00:12.181781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:48.744 [2024-11-20 18:00:12.181788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:48.744 [2024-11-20 18:00:12.181793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:48.744 [2024-11-20 18:00:12.181798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:48.744 [2024-11-20 18:00:12.181803] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:48.744 [2024-11-20 18:00:12.181808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:48.744 [2024-11-20 18:00:12.181814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:48.744 [2024-11-20 18:00:12.181822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:48.744 [2024-11-20 18:00:12.181833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:48.744 [2024-11-20 18:00:12.181849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:48.744 [2024-11-20 18:00:12.181855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:48.744 [2024-11-20 18:00:12.181860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:48.744 [2024-11-20 18:00:12.181866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:48.744 [2024-11-20 18:00:12.181913] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:48.744 [2024-11-20 18:00:12.181919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:48.744 [2024-11-20 18:00:12.181931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:48.744 [2024-11-20 18:00:12.181936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:48.744 [2024-11-20 18:00:12.181941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:48.744 [2024-11-20 18:00:12.181947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.744 [2024-11-20 18:00:12.181952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:48.744 [2024-11-20 18:00:12.181958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:29:48.744 [2024-11-20 18:00:12.181963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.744 [2024-11-20 18:00:12.181995] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:48.744 [2024-11-20 18:00:12.182006] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:54.035 [2024-11-20 18:00:17.108821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.108886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:54.035 [2024-11-20 18:00:17.108902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4926.812 ms 00:29:54.035 [2024-11-20 18:00:17.108909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.134231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.134274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:54.035 [2024-11-20 18:00:17.134286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.120 ms 00:29:54.035 [2024-11-20 18:00:17.134294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.134369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.134383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:54.035 [2024-11-20 18:00:17.134392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:54.035 [2024-11-20 18:00:17.134399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.164998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.165036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:54.035 [2024-11-20 18:00:17.165047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.563 ms 00:29:54.035 [2024-11-20 18:00:17.165058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.165084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.165092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:54.035 [2024-11-20 18:00:17.165100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:54.035 [2024-11-20 18:00:17.165107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.165466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.165489] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:54.035 [2024-11-20 18:00:17.165498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:29:54.035 [2024-11-20 18:00:17.165505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.165546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.165554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:54.035 [2024-11-20 18:00:17.165563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:54.035 [2024-11-20 18:00:17.165569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.179596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.179630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:54.035 [2024-11-20 18:00:17.179639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.006 ms 00:29:54.035 [2024-11-20 18:00:17.179647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.203426] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:54.035 [2024-11-20 18:00:17.203466] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:54.035 [2024-11-20 18:00:17.203479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.203487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:54.035 [2024-11-20 18:00:17.203497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.720 ms 00:29:54.035 [2024-11-20 18:00:17.203503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.218579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.218624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:54.035 [2024-11-20 18:00:17.218635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.037 ms 00:29:54.035 [2024-11-20 18:00:17.218644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.229831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.229863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:54.035 [2024-11-20 18:00:17.229880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.148 ms 00:29:54.035 [2024-11-20 18:00:17.229887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.241231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.241263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:54.035 [2024-11-20 18:00:17.241272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.313 ms 00:29:54.035 [2024-11-20 18:00:17.241279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.035 [2024-11-20 18:00:17.241891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.035 [2024-11-20 18:00:17.241917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:54.035 [2024-11-20 
18:00:17.241926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:29:54.035 [2024-11-20 18:00:17.241933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.301257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.301306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:54.036 [2024-11-20 18:00:17.301317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 59.305 ms 00:29:54.036 [2024-11-20 18:00:17.301325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.311753] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:54.036 [2024-11-20 18:00:17.312368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.312397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:54.036 [2024-11-20 18:00:17.312407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.021 ms 00:29:54.036 [2024-11-20 18:00:17.312414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.312477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.312489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:54.036 [2024-11-20 18:00:17.312498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:54.036 [2024-11-20 18:00:17.312506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.312558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.312568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:54.036 [2024-11-20 18:00:17.312576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:54.036 [2024-11-20 18:00:17.312585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.312605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.312613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:54.036 [2024-11-20 18:00:17.312624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:54.036 [2024-11-20 18:00:17.312631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.312662] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:54.036 [2024-11-20 18:00:17.312671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.312679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:54.036 [2024-11-20 18:00:17.312687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:54.036 [2024-11-20 18:00:17.312694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.335938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.335977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:54.036 [2024-11-20 18:00:17.335988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.226 ms 00:29:54.036 [2024-11-20 18:00:17.335995] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.336051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.036 [2024-11-20 18:00:17.336059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:54.036 [2024-11-20 18:00:17.336067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:54.036 [2024-11-20 18:00:17.336075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.036 [2024-11-20 18:00:17.337289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5174.110 ms, result 0 00:29:54.036 [2024-11-20 18:00:17.352296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.036 [2024-11-20 18:00:17.368281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:54.036 [2024-11-20 18:00:17.376405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:54.036 18:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.036 18:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:54.036 18:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:54.036 18:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:54.036 18:00:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:54.297 [2024-11-20 18:00:17.616522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.297 [2024-11-20 18:00:17.616581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:54.297 [2024-11-20 18:00:17.616594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:54.297 [2024-11-20 18:00:17.616606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.297 [2024-11-20 18:00:17.616630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.297 [2024-11-20 18:00:17.616640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:54.297 [2024-11-20 18:00:17.616649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:54.297 [2024-11-20 18:00:17.616657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.297 [2024-11-20 18:00:17.616677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.297 [2024-11-20 18:00:17.616686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:54.297 [2024-11-20 18:00:17.616694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:54.297 [2024-11-20 18:00:17.616703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.297 [2024-11-20 18:00:17.616764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.234 ms, result 0 00:29:54.297 true 00:29:54.297 18:00:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:54.558 { 00:29:54.558 "name": "ftl", 00:29:54.558 "properties": [ 00:29:54.558 { 00:29:54.558 "name": "superblock_version", 00:29:54.558 "value": 5, 00:29:54.558 "read-only": true 00:29:54.558 }, 
00:29:54.558 { 00:29:54.558 "name": "base_device", 00:29:54.558 "bands": [ 00:29:54.558 { 00:29:54.558 "id": 0, 00:29:54.558 "state": "CLOSED", 00:29:54.558 "validity": 1.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 1, 00:29:54.558 "state": "CLOSED", 00:29:54.558 "validity": 1.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 2, 00:29:54.558 "state": "CLOSED", 00:29:54.558 "validity": 0.007843137254901933 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 3, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 4, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 5, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 6, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 7, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 8, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 9, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 10, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 11, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 12, 00:29:54.558 "state": "FREE", 00:29:54.558 "validity": 0.0 00:29:54.558 }, 00:29:54.558 { 00:29:54.558 "id": 13, 00:29:54.558 "state": "FREE", 00:29:54.559 "validity": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 14, 00:29:54.559 "state": "FREE", 00:29:54.559 "validity": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 15, 00:29:54.559 "state": "FREE", 00:29:54.559 "validity": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 16, 00:29:54.559 "state": "FREE", 00:29:54.559 "validity": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 17, 00:29:54.559 "state": "FREE", 00:29:54.559 "validity": 0.0 00:29:54.559 } 00:29:54.559 ], 00:29:54.559 "read-only": true 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "name": "cache_device", 00:29:54.559 "type": "bdev", 00:29:54.559 "chunks": [ 00:29:54.559 { 00:29:54.559 "id": 0, 00:29:54.559 "state": "INACTIVE", 00:29:54.559 "utilization": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 1, 00:29:54.559 "state": "OPEN", 00:29:54.559 "utilization": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 2, 00:29:54.559 "state": "OPEN", 00:29:54.559 "utilization": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 3, 00:29:54.559 "state": "FREE", 00:29:54.559 "utilization": 0.0 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "id": 4, 00:29:54.559 "state": "FREE", 00:29:54.559 "utilization": 0.0 00:29:54.559 } 00:29:54.559 ], 00:29:54.559 "read-only": true 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "name": "verbose_mode", 00:29:54.559 "value": true, 00:29:54.559 "unit": "", 00:29:54.559 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:54.559 }, 00:29:54.559 { 00:29:54.559 "name": "prep_upgrade_on_shutdown", 00:29:54.559 "value": false, 00:29:54.559 "unit": "", 00:29:54.559 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:54.559 } 00:29:54.559 ] 00:29:54.559 } 00:29:54.559 18:00:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:54.559 18:00:17 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:54.559 18:00:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:54.559 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:54.559 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:54.559 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:54.559 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:54.559 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:54.820 Validate MD5 checksum, iteration 1 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:54.820 18:00:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:54.820 [2024-11-20 18:00:18.346510] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
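Annotation: after the restart, FTL detects the prepared layout, scrubs the five NV-cache chunks (the ~4.9 s "Scrub NV cache" step above), and comes up with an empty cache: the data written before shutdown is now accounted for entirely in closed base-device bands (validity 1.0, 1.0 and 2048/261120 ≈ 0.0078), prep_upgrade_on_shutdown reads false again, and both jq assertions above return 0. A sketch of those assertions, folded here into a single RPC call where the trace above issues one call per filter:

    # Post-restart checks (mirrors upgrade_shutdown.sh@82-90): cache empty, no open bands.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands")
                  | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    [[ $used -ne 0 || $opened -ne 0 ]] && exit 1   # both report 0 in the run above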
00:29:54.820 [2024-11-20 18:00:18.346629] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83101 ] 00:29:55.080 [2024-11-20 18:00:18.506561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.080 [2024-11-20 18:00:18.613659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.992  [2024-11-20T18:00:21.473Z] Copying: 458/1024 [MB] (458 MBps) [2024-11-20T18:00:21.473Z] Copying: 898/1024 [MB] (440 MBps) [2024-11-20T18:00:22.850Z] Copying: 1024/1024 [MB] (average 454 MBps) 00:29:59.310 00:29:59.310 18:00:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:59.310 18:00:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2cf536be2aa30a119cff53b36d9eba6c 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2cf536be2aa30a119cff53b36d9eba6c != \2\c\f\5\3\6\b\e\2\a\a\3\0\a\1\1\9\c\f\f\5\3\b\3\6\d\9\e\b\a\6\c ]] 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:01.853 Validate MD5 checksum, iteration 2 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:01.853 18:00:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:01.853 [2024-11-20 18:00:24.920485] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
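[Annotation] The copy, md5sum, and glob-escaped [[ ... != ... ]] comparison above are one pass of the test's validate-checksum loop: each iteration reads the next 1024 MiB window of ftln1 through spdk_dd over NVMe/TCP, hashes the output file, and checks the digest against the value recorded for that window earlier in the run. A condensed sketch of test_validate_checksum under stated assumptions: tcp_dd is the helper from the traced ftl/common.sh, and iterations and ref_md5 are hypothetical names standing in for the reference state the real script keeps elsewhere:

    test_validate_checksum() {   # upgrade_shutdown.sh@96-105, condensed
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read the next 1 GiB window of the FTL namespace over NVMe/TCP.
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            # ref_md5 (assumed name) holds the digest recorded for this window.
            [[ $sum == "${ref_md5[i]}" ]] || return 1   # digest drifted: fail
        done
    }

The trace shows exactly this shape: skip=0 for iteration 1, skip=1024 for iteration 2, with sums 2cf536be... and cdc7767743... both matching.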
00:30:01.853 [2024-11-20 18:00:24.920597] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83176 ] 00:30:01.853 [2024-11-20 18:00:25.079177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.853 [2024-11-20 18:00:25.181741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.237  [2024-11-20T18:00:27.713Z] Copying: 456/1024 [MB] (456 MBps) [2024-11-20T18:00:27.972Z] Copying: 950/1024 [MB] (494 MBps) [2024-11-20T18:00:28.913Z] Copying: 1024/1024 [MB] (average 483 MBps) 00:30:05.373 00:30:05.373 18:00:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:05.373 18:00:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cdc7767743c18f49bee2ea703705a17f 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cdc7767743c18f49bee2ea703705a17f != \c\d\c\7\7\6\7\7\4\3\c\1\8\f\4\9\b\e\e\2\e\a\7\0\3\7\0\5\a\1\7\f ]] 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83007 ]] 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83007 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83243 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83243 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83243 ']' 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
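[Annotation] What follows is the crux of the test. tcp_target_shutdown_dirty SIGKILLs the running target (pid 83007) so FTL never persists a clean-shutdown state, and tcp_target_setup immediately relaunches spdk_tgt from the saved tgt.json, which forces the recovery path traced below ("SHM: clean 0, shm_clean 0", then the Recover band state and P2L checkpoint steps). A condensed paraphrase of the two ftl/common.sh helpers as traced; the real functions carry more bookkeeping, and rootdir/testdir are assumed variable names:

    tcp_target_shutdown_dirty() {   # common.sh@137-139, condensed
        # SIGKILL: nothing is flushed, so the next startup must recover.
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }
    tcp_target_setup() {            # common.sh@81-91, condensed
        # Relaunch from the saved bdev config; FTL sees the dirty state and
        # replays band state and P2L checkpoints (see the notices below).
        "$rootdir/build/bin/spdk_tgt" "--cpumask=[0]" --config="$testdir/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"
    }

The "line 834: 83007 Killed" message below is the shell reaping that SIGKILLed job; pid 83243 is the freshly started replacement target.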
00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.922 18:00:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:07.922 [2024-11-20 18:00:31.061452] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:30:07.922 [2024-11-20 18:00:31.061583] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83243 ] 00:30:07.922 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83007 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:07.922 [2024-11-20 18:00:31.227651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.922 [2024-11-20 18:00:31.378739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.867 [2024-11-20 18:00:32.290005] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:08.867 [2024-11-20 18:00:32.290098] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:09.130 [2024-11-20 18:00:32.444795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.444856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:09.130 [2024-11-20 18:00:32.444891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:09.130 [2024-11-20 18:00:32.444901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.444976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.444987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:09.130 [2024-11-20 18:00:32.444998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:09.130 [2024-11-20 18:00:32.445006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.445033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:09.130 [2024-11-20 18:00:32.445913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:09.130 [2024-11-20 18:00:32.445963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.445973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:09.130 [2024-11-20 18:00:32.445984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.936 ms 00:30:09.130 [2024-11-20 18:00:32.445992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.446383] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:09.130 [2024-11-20 18:00:32.467175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.467232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:09.130 [2024-11-20 18:00:32.467247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.791 ms 00:30:09.130 [2024-11-20 18:00:32.467257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.477222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:09.130 [2024-11-20 18:00:32.477283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:09.130 [2024-11-20 18:00:32.477300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:09.130 [2024-11-20 18:00:32.477308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.477684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.477709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:09.130 [2024-11-20 18:00:32.477721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:30:09.130 [2024-11-20 18:00:32.477731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.477796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.477808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:09.130 [2024-11-20 18:00:32.477818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:30:09.130 [2024-11-20 18:00:32.477827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.477855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.477864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:09.130 [2024-11-20 18:00:32.477894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:09.130 [2024-11-20 18:00:32.477903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.477930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:09.130 [2024-11-20 18:00:32.481464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.481507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:09.130 [2024-11-20 18:00:32.481518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.540 ms 00:30:09.130 [2024-11-20 18:00:32.481527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.481564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.481574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:09.130 [2024-11-20 18:00:32.481583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:09.130 [2024-11-20 18:00:32.481592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.481635] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:09.130 [2024-11-20 18:00:32.481661] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:09.130 [2024-11-20 18:00:32.481702] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:09.130 [2024-11-20 18:00:32.481725] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:09.130 [2024-11-20 18:00:32.481836] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:09.130 [2024-11-20 18:00:32.481849] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:09.130 [2024-11-20 18:00:32.481861] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:09.130 [2024-11-20 18:00:32.481890] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:09.130 [2024-11-20 18:00:32.481903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:09.130 [2024-11-20 18:00:32.481913] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:09.130 [2024-11-20 18:00:32.481921] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:09.130 [2024-11-20 18:00:32.481931] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:09.130 [2024-11-20 18:00:32.481940] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:09.130 [2024-11-20 18:00:32.481949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.481959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:09.130 [2024-11-20 18:00:32.481969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:30:09.130 [2024-11-20 18:00:32.481978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.482065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.130 [2024-11-20 18:00:32.482075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:09.130 [2024-11-20 18:00:32.482083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:09.130 [2024-11-20 18:00:32.482091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.130 [2024-11-20 18:00:32.482196] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:09.130 [2024-11-20 18:00:32.482209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:09.130 [2024-11-20 18:00:32.482221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:09.130 [2024-11-20 18:00:32.482230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.130 [2024-11-20 18:00:32.482239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:09.130 [2024-11-20 18:00:32.482246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:09.130 [2024-11-20 18:00:32.482254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:09.131 [2024-11-20 18:00:32.482261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:09.131 [2024-11-20 18:00:32.482269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:09.131 [2024-11-20 18:00:32.482275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:09.131 [2024-11-20 18:00:32.482291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:09.131 [2024-11-20 18:00:32.482298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:09.131 [2024-11-20 18:00:32.482319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:09.131 [2024-11-20 18:00:32.482326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:09.131 [2024-11-20 18:00:32.482341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:09.131 [2024-11-20 18:00:32.482348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:09.131 [2024-11-20 18:00:32.482363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:09.131 [2024-11-20 18:00:32.482369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:09.131 [2024-11-20 18:00:32.482376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:09.131 [2024-11-20 18:00:32.482390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:09.131 [2024-11-20 18:00:32.482397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:09.131 [2024-11-20 18:00:32.482405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:09.131 [2024-11-20 18:00:32.482412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:09.131 [2024-11-20 18:00:32.482419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:09.131 [2024-11-20 18:00:32.482425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:09.131 [2024-11-20 18:00:32.482432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:09.131 [2024-11-20 18:00:32.482438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:09.131 [2024-11-20 18:00:32.482444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:09.131 [2024-11-20 18:00:32.482450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:09.131 [2024-11-20 18:00:32.482457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:09.131 [2024-11-20 18:00:32.482472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:09.131 [2024-11-20 18:00:32.482479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:09.131 [2024-11-20 18:00:32.482491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:09.131 [2024-11-20 18:00:32.482513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:09.131 [2024-11-20 18:00:32.482521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482527] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:09.131 [2024-11-20 18:00:32.482534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:09.131 [2024-11-20 18:00:32.482541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:09.131 [2024-11-20 18:00:32.482551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:09.131 [2024-11-20 18:00:32.482560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:09.131 [2024-11-20 18:00:32.482568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:09.131 [2024-11-20 18:00:32.482575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:09.131 [2024-11-20 18:00:32.482583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:09.131 [2024-11-20 18:00:32.482589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:09.131 [2024-11-20 18:00:32.482596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:09.131 [2024-11-20 18:00:32.482606] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:09.131 [2024-11-20 18:00:32.482617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:09.131 [2024-11-20 18:00:32.482636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:09.131 [2024-11-20 18:00:32.482657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:09.131 [2024-11-20 18:00:32.482665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:09.131 [2024-11-20 18:00:32.482673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:09.131 [2024-11-20 18:00:32.482682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:09.131 [2024-11-20 18:00:32.482735] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:09.131 [2024-11-20 18:00:32.482744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:09.131 [2024-11-20 18:00:32.482765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:09.131 [2024-11-20 18:00:32.482772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:09.131 [2024-11-20 18:00:32.482779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:09.131 [2024-11-20 18:00:32.482787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.482794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:09.131 [2024-11-20 18:00:32.482801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.659 ms 00:30:09.131 [2024-11-20 18:00:32.482811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.131 [2024-11-20 18:00:32.517069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.517116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:09.131 [2024-11-20 18:00:32.517128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.204 ms 00:30:09.131 [2024-11-20 18:00:32.517138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.131 [2024-11-20 18:00:32.517186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.517196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:09.131 [2024-11-20 18:00:32.517205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:09.131 [2024-11-20 18:00:32.517214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.131 [2024-11-20 18:00:32.557659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.557713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:09.131 [2024-11-20 18:00:32.557726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.380 ms 00:30:09.131 [2024-11-20 18:00:32.557735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.131 [2024-11-20 18:00:32.557783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.557793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:09.131 [2024-11-20 18:00:32.557803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:09.131 [2024-11-20 18:00:32.557811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.131 [2024-11-20 18:00:32.557977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.557992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:09.131 [2024-11-20 18:00:32.558003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:30:09.131 [2024-11-20 18:00:32.558013] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:09.131 [2024-11-20 18:00:32.558068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.558078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:09.131 [2024-11-20 18:00:32.558087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:09.131 [2024-11-20 18:00:32.558095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.131 [2024-11-20 18:00:32.579193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.131 [2024-11-20 18:00:32.579239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:09.131 [2024-11-20 18:00:32.579251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.070 ms 00:30:09.131 [2024-11-20 18:00:32.579264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.132 [2024-11-20 18:00:32.579390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.132 [2024-11-20 18:00:32.579402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:09.132 [2024-11-20 18:00:32.579413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:09.132 [2024-11-20 18:00:32.579424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.132 [2024-11-20 18:00:32.611961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.132 [2024-11-20 18:00:32.612044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:09.132 [2024-11-20 18:00:32.612061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.513 ms 00:30:09.132 [2024-11-20 18:00:32.612070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.132 [2024-11-20 18:00:32.622325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.132 [2024-11-20 18:00:32.622374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:09.132 [2024-11-20 18:00:32.622398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.553 ms 00:30:09.132 [2024-11-20 18:00:32.622407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.393 [2024-11-20 18:00:32.691490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.393 [2024-11-20 18:00:32.691538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:09.393 [2024-11-20 18:00:32.691558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 69.006 ms 00:30:09.393 [2024-11-20 18:00:32.691567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.393 [2024-11-20 18:00:32.691711] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:09.393 [2024-11-20 18:00:32.691842] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:09.393 [2024-11-20 18:00:32.691970] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:09.393 [2024-11-20 18:00:32.692079] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:09.393 [2024-11-20 18:00:32.692102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.393 [2024-11-20 18:00:32.692111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:09.393 [2024-11-20 
18:00:32.692120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.491 ms 00:30:09.393 [2024-11-20 18:00:32.692128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.393 [2024-11-20 18:00:32.692203] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:09.393 [2024-11-20 18:00:32.692223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.393 [2024-11-20 18:00:32.692234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:09.393 [2024-11-20 18:00:32.692243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:09.393 [2024-11-20 18:00:32.692251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.393 [2024-11-20 18:00:32.706578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.393 [2024-11-20 18:00:32.706618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:09.393 [2024-11-20 18:00:32.706630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.303 ms 00:30:09.393 [2024-11-20 18:00:32.706639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.393 [2024-11-20 18:00:32.715070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.393 [2024-11-20 18:00:32.715104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:09.393 [2024-11-20 18:00:32.715114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:09.393 [2024-11-20 18:00:32.715122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.393 [2024-11-20 18:00:32.715205] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:09.393 [2024-11-20 18:00:32.715381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.393 [2024-11-20 18:00:32.715402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:09.393 [2024-11-20 18:00:32.715411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:30:09.393 [2024-11-20 18:00:32.715420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.336 [2024-11-20 18:00:33.638342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.336 [2024-11-20 18:00:33.638448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:10.336 [2024-11-20 18:00:33.638470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 921.999 ms 00:30:10.336 [2024-11-20 18:00:33.638480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.336 [2024-11-20 18:00:33.643739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.336 [2024-11-20 18:00:33.643814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:10.336 [2024-11-20 18:00:33.643829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.717 ms 00:30:10.336 [2024-11-20 18:00:33.643839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.336 [2024-11-20 18:00:33.644816] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:10.336 [2024-11-20 18:00:33.644894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.336 [2024-11-20 18:00:33.644905] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:10.336 [2024-11-20 18:00:33.644918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.987 ms 00:30:10.336 [2024-11-20 18:00:33.644927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.336 [2024-11-20 18:00:33.644973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.336 [2024-11-20 18:00:33.644985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:10.336 [2024-11-20 18:00:33.644996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:10.336 [2024-11-20 18:00:33.645004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.336 [2024-11-20 18:00:33.645060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 929.851 ms, result 0 00:30:10.336 [2024-11-20 18:00:33.645110] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:10.336 [2024-11-20 18:00:33.645439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.336 [2024-11-20 18:00:33.645468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:10.336 [2024-11-20 18:00:33.645478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:30:10.336 [2024-11-20 18:00:33.645488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.508009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.508079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:11.279 [2024-11-20 18:00:34.508093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 861.162 ms 00:30:11.279 [2024-11-20 18:00:34.508101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.512647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.512681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:11.279 [2024-11-20 18:00:34.512690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.448 ms 00:30:11.279 [2024-11-20 18:00:34.512698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.513493] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:11.279 [2024-11-20 18:00:34.513527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.513536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:11.279 [2024-11-20 18:00:34.513545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.803 ms 00:30:11.279 [2024-11-20 18:00:34.513553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.513582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.513591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:11.279 [2024-11-20 18:00:34.513599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:11.279 [2024-11-20 18:00:34.513607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 
18:00:34.513642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 868.531 ms, result 0 00:30:11.279 [2024-11-20 18:00:34.513686] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:11.279 [2024-11-20 18:00:34.513698] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:11.279 [2024-11-20 18:00:34.513707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.513716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:11.279 [2024-11-20 18:00:34.513724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1798.519 ms 00:30:11.279 [2024-11-20 18:00:34.513732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.513760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.513769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:11.279 [2024-11-20 18:00:34.513782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:11.279 [2024-11-20 18:00:34.513789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.525644] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:11.279 [2024-11-20 18:00:34.525761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.525772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:11.279 [2024-11-20 18:00:34.525781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.956 ms 00:30:11.279 [2024-11-20 18:00:34.525789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.526499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.526522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:11.279 [2024-11-20 18:00:34.526534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.644 ms 00:30:11.279 [2024-11-20 18:00:34.526542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.528781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.528804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:11.279 [2024-11-20 18:00:34.528814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.222 ms 00:30:11.279 [2024-11-20 18:00:34.528823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.528860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.528877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:11.279 [2024-11-20 18:00:34.528885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:11.279 [2024-11-20 18:00:34.528898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.529002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.529013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:11.279 
[2024-11-20 18:00:34.529021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:11.279 [2024-11-20 18:00:34.529029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.529050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.529057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:11.279 [2024-11-20 18:00:34.529065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:11.279 [2024-11-20 18:00:34.529072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.529104] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:11.279 [2024-11-20 18:00:34.529113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.529121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:11.279 [2024-11-20 18:00:34.529128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:11.279 [2024-11-20 18:00:34.529136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.529189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.279 [2024-11-20 18:00:34.529199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:11.279 [2024-11-20 18:00:34.529206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:11.279 [2024-11-20 18:00:34.529214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.279 [2024-11-20 18:00:34.530183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2084.894 ms, result 0 00:30:11.279 [2024-11-20 18:00:34.542564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.279 [2024-11-20 18:00:34.558554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:11.279 [2024-11-20 18:00:34.566971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:11.279 Validate MD5 checksum, iteration 1 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:11.279 18:00:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:11.279 18:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:11.279 [2024-11-20 18:00:34.667223] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 00:30:11.279 [2024-11-20 18:00:34.667347] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83283 ] 00:30:11.538 [2024-11-20 18:00:34.824612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.538 [2024-11-20 18:00:34.901852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.919  [2024-11-20T18:00:37.028Z] Copying: 613/1024 [MB] (613 MBps) [2024-11-20T18:00:38.404Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:30:14.864 00:30:14.864 18:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:14.864 18:00:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2cf536be2aa30a119cff53b36d9eba6c 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2cf536be2aa30a119cff53b36d9eba6c != \2\c\f\5\3\6\b\e\2\a\a\3\0\a\1\1\9\c\f\f\5\3\b\3\6\d\9\e\b\a\6\c ]] 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:16.764 Validate MD5 checksum, iteration 2 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:16.764 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:16.764 [2024-11-20 18:00:40.209702] Starting SPDK v25.01-pre git sha1 
5c8d99223 / DPDK 24.03.0 initialization... 00:30:16.764 [2024-11-20 18:00:40.209819] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83350 ] 00:30:17.022 [2024-11-20 18:00:40.365553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.022 [2024-11-20 18:00:40.440313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.404  [2024-11-20T18:00:42.514Z] Copying: 631/1024 [MB] (631 MBps) [2024-11-20T18:00:47.788Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:30:24.248 00:30:24.248 18:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:24.248 18:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cdc7767743c18f49bee2ea703705a17f 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cdc7767743c18f49bee2ea703705a17f != \c\d\c\7\7\6\7\7\4\3\c\1\8\f\4\9\b\e\e\2\e\a\7\0\3\7\0\5\a\1\7\f ]] 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83243 ]] 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83243 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83243 ']' 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83243 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83243 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.144 killing process with pid 83243 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83243' 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83243 00:30:26.144 18:00:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83243 00:30:26.711 [2024-11-20 18:00:49.947501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:26.711 [2024-11-20 18:00:49.960206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.711 [2024-11-20 18:00:49.960243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:26.711 [2024-11-20 18:00:49.960255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:26.711 [2024-11-20 18:00:49.960263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.711 [2024-11-20 18:00:49.960283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:26.711 [2024-11-20 18:00:49.962426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.711 [2024-11-20 18:00:49.962451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:26.711 [2024-11-20 18:00:49.962464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.131 ms 00:30:26.711 [2024-11-20 18:00:49.962471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.711 [2024-11-20 18:00:49.962677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.711 [2024-11-20 18:00:49.962686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:26.711 [2024-11-20 18:00:49.962693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:30:26.711 [2024-11-20 18:00:49.962699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.711 [2024-11-20 18:00:49.964086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.711 [2024-11-20 18:00:49.964113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:26.711 [2024-11-20 18:00:49.964121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.374 ms 00:30:26.711 [2024-11-20 18:00:49.964127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.711 [2024-11-20 18:00:49.965008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.711 [2024-11-20 18:00:49.965028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:26.711 [2024-11-20 18:00:49.965035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:30:26.711 [2024-11-20 18:00:49.965041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:49.972532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:49.972570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:26.712 [2024-11-20 18:00:49.972578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.463 ms 00:30:26.712 [2024-11-20 18:00:49.972589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:49.977424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:49.977453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:26.712 [2024-11-20 18:00:49.977462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.806 ms 00:30:26.712 [2024-11-20 18:00:49.977469] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:49.977537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:49.977545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:26.712 [2024-11-20 18:00:49.977552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:26.712 [2024-11-20 18:00:49.977558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:49.985705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:49.985732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:26.712 [2024-11-20 18:00:49.985739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.129 ms 00:30:26.712 [2024-11-20 18:00:49.985745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:49.993945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:49.993970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:26.712 [2024-11-20 18:00:49.993977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.172 ms 00:30:26.712 [2024-11-20 18:00:49.993983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:50.001923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:50.001948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:26.712 [2024-11-20 18:00:50.001955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.913 ms 00:30:26.712 [2024-11-20 18:00:50.001961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:50.009816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:50.009842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:26.712 [2024-11-20 18:00:50.009849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.804 ms 00:30:26.712 [2024-11-20 18:00:50.009854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:50.009889] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:26.712 [2024-11-20 18:00:50.009903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:26.712 [2024-11-20 18:00:50.009910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:26.712 [2024-11-20 18:00:50.009917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:26.712 [2024-11-20 18:00:50.009924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 
[2024-11-20 18:00:50.009954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.009995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.010001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.010008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:26.712 [2024-11-20 18:00:50.010015] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:26.712 [2024-11-20 18:00:50.010022] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b63b0810-ad5f-4ecc-9d05-1a07de58f153 00:30:26.712 [2024-11-20 18:00:50.010029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:26.712 [2024-11-20 18:00:50.010034] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:26.712 [2024-11-20 18:00:50.010040] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:26.712 [2024-11-20 18:00:50.010046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:26.712 [2024-11-20 18:00:50.010051] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:26.712 [2024-11-20 18:00:50.010057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:26.712 [2024-11-20 18:00:50.010063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:26.712 [2024-11-20 18:00:50.010068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:26.712 [2024-11-20 18:00:50.010073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:26.712 [2024-11-20 18:00:50.010080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:50.010090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:26.712 [2024-11-20 18:00:50.010097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:30:26.712 [2024-11-20 18:00:50.010103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:50.020382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:50.020409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:26.712 [2024-11-20 18:00:50.020418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.265 ms 00:30:26.712 [2024-11-20 18:00:50.020425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:30:26.712 [2024-11-20 18:00:50.020732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.712 [2024-11-20 18:00:50.020750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:26.712 [2024-11-20 18:00:50.020758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:30:26.712 [2024-11-20 18:00:50.020764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:50.057152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.712 [2024-11-20 18:00:50.057181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:26.712 [2024-11-20 18:00:50.057191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.712 [2024-11-20 18:00:50.057198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.712 [2024-11-20 18:00:50.057230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.712 [2024-11-20 18:00:50.057237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:26.712 [2024-11-20 18:00:50.057244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.712 [2024-11-20 18:00:50.057251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.057328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.057337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:26.713 [2024-11-20 18:00:50.057344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.057352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.057368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.057397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:26.713 [2024-11-20 18:00:50.057403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.057410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.121673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.121707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:26.713 [2024-11-20 18:00:50.121718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.121724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.173266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:26.713 [2024-11-20 18:00:50.173276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.173282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.173354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:26.713 [2024-11-20 18:00:50.173362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.173369] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.173427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:26.713 [2024-11-20 18:00:50.173437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.173450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.173540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:26.713 [2024-11-20 18:00:50.173548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.173554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.173586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:26.713 [2024-11-20 18:00:50.173593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.173602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.173642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:26.713 [2024-11-20 18:00:50.173648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.173654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:26.713 [2024-11-20 18:00:50.173701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:26.713 [2024-11-20 18:00:50.173709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:26.713 [2024-11-20 18:00:50.173716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.713 [2024-11-20 18:00:50.173822] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 213.590 ms, result 0 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:27.649 Remove shared memory files 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:27.649 18:00:50 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83007 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:27.649 00:30:27.649 real 1m29.592s 00:30:27.649 user 2m0.640s 00:30:27.649 sys 0m21.411s 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.649 ************************************ 00:30:27.649 END TEST ftl_upgrade_shutdown 00:30:27.649 ************************************ 00:30:27.649 18:00:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:27.649 18:00:50 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:30:27.649 18:00:50 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:27.649 18:00:50 ftl -- ftl/ftl.sh@14 -- # killprocess 75167 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@954 -- # '[' -z 75167 ']' 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@958 -- # kill -0 75167 00:30:27.649 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75167) - No such process 00:30:27.649 Process with pid 75167 is not found 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75167 is not found' 00:30:27.649 18:00:50 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:27.649 18:00:50 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83491 00:30:27.649 18:00:50 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83491 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@835 -- # '[' -z 83491 ']' 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.649 18:00:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:27.649 18:00:50 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:27.649 [2024-11-20 18:00:51.001819] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization... 
00:30:27.649 [2024-11-20 18:00:51.001951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83491 ] 00:30:27.649 [2024-11-20 18:00:51.158050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.908 [2024-11-20 18:00:51.244337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.475 18:00:51 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.475 18:00:51 ftl -- common/autotest_common.sh@868 -- # return 0 00:30:28.475 18:00:51 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:28.733 nvme0n1 00:30:28.734 18:00:52 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:28.734 18:00:52 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:28.734 18:00:52 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:28.992 18:00:52 ftl -- ftl/common.sh@28 -- # stores=4cea333b-9e20-4a4a-805b-445f763d63aa 00:30:28.992 18:00:52 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:28.992 18:00:52 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cea333b-9e20-4a4a-805b-445f763d63aa 00:30:28.992 18:00:52 ftl -- ftl/ftl.sh@23 -- # killprocess 83491 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@954 -- # '[' -z 83491 ']' 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@958 -- # kill -0 83491 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@959 -- # uname 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83491 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:28.992 killing process with pid 83491 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83491' 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@973 -- # kill 83491 00:30:28.992 18:00:52 ftl -- common/autotest_common.sh@978 -- # wait 83491 00:30:30.467 18:00:53 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:30.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:30.467 Waiting for block devices as requested 00:30:30.726 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:30.726 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:30.726 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:30.986 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:36.278 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:36.278 Remove shared memory files 00:30:36.278 18:00:59 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:36.278 18:00:59 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:36.278 18:00:59 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:36.278 18:00:59 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:36.278 18:00:59 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:36.278 18:00:59 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:36.278 18:00:59 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:36.278 00:30:36.278 real 
12m15.124s 00:30:36.278 user 14m40.716s 00:30:36.278 sys 1m9.063s 00:30:36.278 18:00:59 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.278 18:00:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:36.278 ************************************ 00:30:36.278 END TEST ftl 00:30:36.278 ************************************ 00:30:36.278 18:00:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:36.278 18:00:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:36.278 18:00:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:36.278 18:00:59 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:30:36.278 18:00:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:36.278 18:00:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:36.278 18:00:59 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:30:36.278 18:00:59 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:30:36.278 18:00:59 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:30:36.278 18:00:59 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:30:36.278 18:00:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.278 18:00:59 -- common/autotest_common.sh@10 -- # set +x 00:30:36.278 18:00:59 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:30:36.278 18:00:59 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:30:36.278 18:00:59 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:30:36.278 18:00:59 -- common/autotest_common.sh@10 -- # set +x 00:30:37.663 INFO: APP EXITING 00:30:37.663 INFO: killing all VMs 00:30:37.663 INFO: killing vhost app 00:30:37.663 INFO: EXIT DONE 00:30:37.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:38.185 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:38.185 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:38.185 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:38.185 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:38.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:39.018 Cleaning 00:30:39.018 Removing: /var/run/dpdk/spdk0/config 00:30:39.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:39.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:39.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:39.018 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:39.018 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:39.018 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:39.018 Removing: /var/run/dpdk/spdk0 00:30:39.018 Removing: /var/run/dpdk/spdk_pid56956 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57169 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57387 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57485 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57530 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57647 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57665 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57859 00:30:39.018 Removing: /var/run/dpdk/spdk_pid57957 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58052 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58159 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58256 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58301 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58332 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58408 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58492 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58928 00:30:39.018 Removing: /var/run/dpdk/spdk_pid58981 00:30:39.018 
Removing: /var/run/dpdk/spdk_pid59044 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59060 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59173 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59189 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59302 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59318 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59371 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59389 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59448 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59466 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59626 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59662 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59746 00:30:39.018 Removing: /var/run/dpdk/spdk_pid59923 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60007 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60044 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60490 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60600 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60713 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60777 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60803 00:30:39.018 Removing: /var/run/dpdk/spdk_pid60881 00:30:39.018 Removing: /var/run/dpdk/spdk_pid61511 00:30:39.018 Removing: /var/run/dpdk/spdk_pid61553 00:30:39.018 Removing: /var/run/dpdk/spdk_pid62040 00:30:39.018 Removing: /var/run/dpdk/spdk_pid62139 00:30:39.018 Removing: /var/run/dpdk/spdk_pid62260 00:30:39.018 Removing: /var/run/dpdk/spdk_pid62313 00:30:39.018 Removing: /var/run/dpdk/spdk_pid62344 00:30:39.018 Removing: /var/run/dpdk/spdk_pid62364 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64207 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64344 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64348 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64360 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64404 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64408 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64420 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64466 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64470 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64482 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64521 00:30:39.018 Removing: /var/run/dpdk/spdk_pid64525 00:30:39.279 Removing: /var/run/dpdk/spdk_pid64537 00:30:39.279 Removing: /var/run/dpdk/spdk_pid65929 00:30:39.279 Removing: /var/run/dpdk/spdk_pid66032 00:30:39.279 Removing: /var/run/dpdk/spdk_pid67434 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69177 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69251 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69326 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69435 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69522 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69618 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69687 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69762 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69866 00:30:39.279 Removing: /var/run/dpdk/spdk_pid69958 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70052 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70122 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70197 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70301 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70393 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70489 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70552 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70633 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70737 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70829 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70924 00:30:39.279 Removing: /var/run/dpdk/spdk_pid70993 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71067 00:30:39.279 Removing: 
/var/run/dpdk/spdk_pid71141 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71220 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71319 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71410 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71503 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71584 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71658 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71739 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71808 00:30:39.279 Removing: /var/run/dpdk/spdk_pid71912 00:30:39.279 Removing: /var/run/dpdk/spdk_pid72008 00:30:39.279 Removing: /var/run/dpdk/spdk_pid72152 00:30:39.279 Removing: /var/run/dpdk/spdk_pid72436 00:30:39.279 Removing: /var/run/dpdk/spdk_pid72473 00:30:39.279 Removing: /var/run/dpdk/spdk_pid72932 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73117 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73218 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73329 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73381 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73401 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73706 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73755 00:30:39.279 Removing: /var/run/dpdk/spdk_pid73833 00:30:39.279 Removing: /var/run/dpdk/spdk_pid74222 00:30:39.279 Removing: /var/run/dpdk/spdk_pid74367 00:30:39.279 Removing: /var/run/dpdk/spdk_pid75167 00:30:39.279 Removing: /var/run/dpdk/spdk_pid75305 00:30:39.279 Removing: /var/run/dpdk/spdk_pid75485 00:30:39.279 Removing: /var/run/dpdk/spdk_pid75593 00:30:39.279 Removing: /var/run/dpdk/spdk_pid75907 00:30:39.279 Removing: /var/run/dpdk/spdk_pid76155 00:30:39.279 Removing: /var/run/dpdk/spdk_pid76502 00:30:39.279 Removing: /var/run/dpdk/spdk_pid76686 00:30:39.279 Removing: /var/run/dpdk/spdk_pid76827 00:30:39.279 Removing: /var/run/dpdk/spdk_pid76870 00:30:39.279 Removing: /var/run/dpdk/spdk_pid77024 00:30:39.279 Removing: /var/run/dpdk/spdk_pid77049 00:30:39.279 Removing: /var/run/dpdk/spdk_pid77102 00:30:39.279 Removing: /var/run/dpdk/spdk_pid77360 00:30:39.279 Removing: /var/run/dpdk/spdk_pid77587 00:30:39.279 Removing: /var/run/dpdk/spdk_pid78091 00:30:39.279 Removing: /var/run/dpdk/spdk_pid78742 00:30:39.279 Removing: /var/run/dpdk/spdk_pid79349 00:30:39.279 Removing: /var/run/dpdk/spdk_pid79891 00:30:39.279 Removing: /var/run/dpdk/spdk_pid80033 00:30:39.279 Removing: /var/run/dpdk/spdk_pid80120 00:30:39.279 Removing: /var/run/dpdk/spdk_pid80544 00:30:39.279 Removing: /var/run/dpdk/spdk_pid80603 00:30:39.279 Removing: /var/run/dpdk/spdk_pid81225 00:30:39.279 Removing: /var/run/dpdk/spdk_pid81680 00:30:39.279 Removing: /var/run/dpdk/spdk_pid82450 00:30:39.279 Removing: /var/run/dpdk/spdk_pid82583 00:30:39.279 Removing: /var/run/dpdk/spdk_pid82627 00:30:39.279 Removing: /var/run/dpdk/spdk_pid82696 00:30:39.279 Removing: /var/run/dpdk/spdk_pid82752 00:30:39.279 Removing: /var/run/dpdk/spdk_pid82819 00:30:39.279 Removing: /var/run/dpdk/spdk_pid83007 00:30:39.279 Removing: /var/run/dpdk/spdk_pid83101 00:30:39.279 Removing: /var/run/dpdk/spdk_pid83176 00:30:39.279 Removing: /var/run/dpdk/spdk_pid83243 00:30:39.279 Removing: /var/run/dpdk/spdk_pid83283 00:30:39.279 Removing: /var/run/dpdk/spdk_pid83350 00:30:39.279 Removing: /var/run/dpdk/spdk_pid83491 00:30:39.279 Clean 00:30:39.541 18:01:02 -- common/autotest_common.sh@1453 -- # return 0 00:30:39.541 18:01:02 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:30:39.541 18:01:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.541 18:01:02 -- common/autotest_common.sh@10 -- # set +x 00:30:39.541 18:01:02 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:30:39.541 18:01:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.541 18:01:02 -- common/autotest_common.sh@10 -- # set +x 00:30:39.541 18:01:02 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:39.541 18:01:02 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:39.541 18:01:02 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:39.541 18:01:02 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:30:39.541 18:01:02 -- spdk/autotest.sh@398 -- # hostname 00:30:39.541 18:01:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:39.541 geninfo: WARNING: invalid characters removed from testname! 00:31:06.126 18:01:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:08.042 18:01:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:10.594 18:01:33 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:12.510 18:01:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:13.894 18:01:37 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:15.808 18:01:39 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:17.722 18:01:40 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:17.722 18:01:40 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:17.722 18:01:40 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:17.722 18:01:40 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:17.722 18:01:40 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:17.722 18:01:40 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:17.722 + [[ -n 5015 ]] 00:31:17.722 + sudo kill 5015 00:31:17.732 [Pipeline] } 00:31:17.750 [Pipeline] // timeout 00:31:17.756 [Pipeline] } 00:31:17.770 [Pipeline] // stage 00:31:17.775 [Pipeline] } 00:31:17.789 [Pipeline] // catchError 00:31:17.798 [Pipeline] stage 00:31:17.800 [Pipeline] { (Stop VM) 00:31:17.812 [Pipeline] sh 00:31:18.097 + vagrant halt 00:31:21.404 ==> default: Halting domain... 00:31:26.709 [Pipeline] sh 00:31:26.992 + vagrant destroy -f 00:31:29.543 ==> default: Removing domain... 00:31:30.127 [Pipeline] sh 00:31:30.506 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:31:30.533 [Pipeline] } 00:31:30.548 [Pipeline] // stage 00:31:30.553 [Pipeline] } 00:31:30.567 [Pipeline] // dir 00:31:30.572 [Pipeline] } 00:31:30.587 [Pipeline] // wrap 00:31:30.593 [Pipeline] } 00:31:30.606 [Pipeline] // catchError 00:31:30.616 [Pipeline] stage 00:31:30.619 [Pipeline] { (Epilogue) 00:31:30.632 [Pipeline] sh 00:31:30.920 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:36.209 [Pipeline] catchError 00:31:36.211 [Pipeline] { 00:31:36.222 [Pipeline] sh 00:31:36.506 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:36.506 Artifacts sizes are good 00:31:36.516 [Pipeline] } 00:31:36.529 [Pipeline] // catchError 00:31:36.539 [Pipeline] archiveArtifacts 00:31:36.546 Archiving artifacts 00:31:36.638 [Pipeline] cleanWs 00:31:36.648 [WS-CLEANUP] Deleting project workspace... 00:31:36.648 [WS-CLEANUP] Deferred wipeout is used... 00:31:36.653 [WS-CLEANUP] done 00:31:36.655 [Pipeline] } 00:31:36.667 [Pipeline] // stage 00:31:36.671 [Pipeline] } 00:31:36.680 [Pipeline] // node 00:31:36.683 [Pipeline] End of Pipeline 00:31:36.713 Finished: SUCCESS